
Education and Information Technologies, Volume 21, Issue 6, pp 1769–1784

A design science research methodology for developing a computer-aided assessment approach using method marking concept

  • Hussein Genemo
  • Shah Jahan Miah
  • Alasdair McAndrew
Article

Abstract

Assessment has been defined as an authentic method that plays an important role in evaluating students’ attitudes towards learning and acquiring lifelong knowledge. Traditional methods of assessment, including Computer-Aided Assessment (CAA) for mathematics, show limited ability to assess students’ full working unless multi-step questions are sub-divided into sub-questions. This issue remains a significant drawback, particularly for the method-marking approach. To address it, the aim of the study is to develop a methodological framework for creating an information and communications technology (ICT) artefact prototype. The prototype, termed the method marking assessment (MMA) artefact, implements a method-marking concept to assess students’ work on multi-step questions. An extensive literature review revealed common features between the characteristics of complex-problem solving and the ICT-based assessment of multi-step questions; the complex-problem paradigm is therefore used in this study to develop the MMA prototype.
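The paper describes the method-marking concept rather than a concrete implementation, so the following is a minimal sketch of how partial credit could be awarded over the intermediate steps of a multi-step question. The `StepRule` structure, the example equation, and the weights are hypothetical illustrations, not the authors' MMA artefact.

```python
# Minimal sketch of method marking (partial credit over multi-step working).
# This is NOT the authors' MMA implementation; the rules, weights, and the
# student-working format below are assumptions made for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class StepRule:
    description: str                        # what the step should establish
    weight: float                           # marks awarded if the step is correct
    check: Callable[[Dict], bool]           # predicate over the student's working


def method_mark(working: Dict, rules: List[StepRule]) -> float:
    """Award credit for every correct intermediate step, not just the answer."""
    return sum(rule.weight for rule in rules if rule.check(working))


# Example: solve 2x + 6 = 10 in two recorded steps.
rules = [
    StepRule("isolate the x term: 2x = 4", 1.0,
             lambda w: w.get("after_subtraction") == "2x = 4"),
    StepRule("divide by the coefficient: x = 2", 1.0,
             lambda w: w.get("final_answer") == 2),
]

# A student whose final answer is wrong still earns the mark for the first step.
student_working = {"after_subtraction": "2x = 4", "final_answer": 3}
print(method_mark(student_working, rules))  # 1.0 of a possible 2.0
```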

Keywords

Computer aided assessment methods · Design science research · Expert systems · Method marking


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Hussein Genemo (1)
  • Shah Jahan Miah (2), corresponding author
  • Alasdair McAndrew (1)
  1. College of Engineering and Sciences, Victoria University, Melbourne, Australia
  2. College of Business, Victoria University, Melbourne, Australia
