Exploring the psychometric properties of the mind-map scoring rubric

  • Cheng Hua
  • Stefanie A. Wind
Original Paper

Abstract

Educational practitioners have used mind mapping as an instructional strategy for decades. Recently, researchers have explored the use of mind mapping in classroom settings and observed that this tool may benefit students’ conceptual understanding as well as their creativity. Despite this widespread use, there is a lack of psychometrically sound assessment rubrics that researchers and practitioners can use to evaluate the quality of mind maps. To address this issue, we developed and evaluated a new scoring rubric, the mind-map scoring rubric (MMSR), using a sample of 120 mind maps. By applying many-facet Rasch modeling, we observed that the MMSR demonstrated acceptable psychometric properties. We also observed that the difficulty of the mind-map aspects was not consistent across educational levels. We discuss implications for research and practice in relation to the use of visual-learning tools.
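As context for the modeling approach named in the abstract, a minimal sketch of a many-facet Rasch model in Linacre's rating-scale formulation follows; the notation is illustrative, and the authors may have used a different parameterization (e.g., a partial-credit variant):

$$\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \lambda_i - \delta_j - \tau_k$$

Here $P_{nijk}$ is the probability that mind map $n$ receives rating category $k$ rather than $k-1$ on rubric aspect $i$ from rater $j$; $\theta_n$ is the latent quality of the mind map, $\lambda_i$ the difficulty of the rubric aspect, $\delta_j$ the severity of the rater, and $\tau_k$ the threshold between adjacent rating categories. Under this formulation, the inconsistency in aspect difficulty across educational levels reported above would appear as instability in the $\lambda_i$ estimates when the model is fit separately by level.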

Keywords

Mind map · Scoring rubric · Many-facet Rasch model · Rater-mediated assessment

Notes

Compliance with ethical standards

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Copyright information

© The Behaviormetric Society 2018

Authors and Affiliations

  1. Department of Educational Studies in Psychology, Research Methodology, and Counseling, The University of Alabama, Tuscaloosa, USA