The Assessment Agent System: design, development, and evaluation

  • Jianhua Liu
Development Article

Abstract

This article reports the design, development, and evaluation of an online software application for assessing students’ understanding of curricular content based on concept maps. This computer-based assessment program, called the Assessment Agent System, was designed following an agent-oriented software design method. The Assessment Agent System is composed of five types of software agents: instructor agent, student agent, management agent, assessment agent, and reporting agent. Through communication and cooperation, the software agents in the system collectively provide user-system interaction, user management, task authoring and management, assessment delivery, task presentation, response collection, automatic assessment, and reporting. One-to-one and group evaluations were conducted to reveal students’ perceptions of the Assessment Agent System. Measures of visual clarity, system functionality, consistency, and error prevention and correction indicate that the Assessment Agent System is a useful tool for large-scale assessment based on concept maps. Through the process of designing, developing, and evaluating the Assessment Agent System, this study demonstrates the agent-oriented approach to producing educational software applications. Furthermore, this research explored a concept map assessment method for the Assessment Agent System: when node terms and linking phrases are provided, student concept maps can be assessed automatically by comparing them with a criterion concept map, proposition by proposition. However, the validity of this proposition-comparing method depends on the accuracy and thoroughness of the criterion propositions. Assessment criteria therefore need to be continually refined and improved through the examination of student-created propositions.
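As a minimal sketch of the proposition-comparing method described above (not the system’s actual implementation), the following Python fragment assumes each proposition is a (node term, linking phrase, node term) triple drawn from instructor-provided lists, and scores a student map as the fraction of criterion propositions it reproduces. All function names and sample propositions are hypothetical.

```python
def normalize(term: str) -> str:
    """Canonicalize a node term or linking phrase before comparison."""
    return " ".join(term.lower().split())

def score_concept_map(student_props, criterion_props):
    """Score a student map as the fraction of criterion propositions matched.

    Each proposition is a (source term, linking phrase, target term) triple.
    """
    criterion = {tuple(normalize(t) for t in p) for p in criterion_props}
    student = {tuple(normalize(t) for t in p) for p in student_props}
    return len(student & criterion) / len(criterion) if criterion else 0.0

# Hypothetical example: one of the two criterion propositions is matched.
criterion = [
    ("concept map", "is composed of", "propositions"),
    ("proposition", "links", "two concepts"),
]
student = [
    ("Concept map", "is composed of", "Propositions"),
    ("proposition", "contains", "two concepts"),
]
print(score_concept_map(student, criterion))  # -> 0.5
```

Exact matching of this kind is workable only because node terms and linking phrases come from closed, instructor-provided lists; with an open vocabulary, the criterion set would miss valid student propositions, which is why the assessment criteria must be refined against student-created work.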

Keywords

Computer-based assessment · Automated assessment · Concept map · Assessment agent · Software agent

Acknowledgments

The author would like to thank Drs. Kenneth Potter, Barbara Lockee, Mike Moore, and Todd Ogle for their insightful advice on conducting this study, as well as Mr. Todd Bowden for his knowledgeable assistance in developing the Assessment Agent System.

Copyright information

© Association for Educational Communications and Technology 2013

Authors and Affiliations

  1. Department of Learning Sciences and Technologies, School of Education, Virginia Polytechnic Institute and State University, Blacksburg, USA
  2. Center for Workplace Learning and Performance, Office of Human Resources, The Pennsylvania State University, University Park, USA