Does audience matter? Comparing teachers’ and non-teachers’ application and perception of quality rubrics for evaluating Open Educational Resources
While many rubrics have been developed to guide the evaluation of the quality of Open Educational Resources (OER), few studies have empirically investigated how different people apply and perceive such rubrics. This study examines how 44 participants (22 teachers and 22 non-teachers) applied three quality rubrics (comprising a total of 17 quality indicators) to evaluate 20 OER, and how they perceived the utility of these rubrics. Results showed that both teachers and non-teachers found some indicators more difficult to apply than others and displayed different response styles across indicators. In addition, teachers gave higher overall ratings to OER, whereas non-teachers' ratings generally showed higher inter-rater agreement. Regarding rubric perception, both groups perceived the rubrics as useful in helping them find high-quality OER, but they differed in their preferences among quality rubrics and indicators.
Keywords: Open Educational Resources, Quality rubrics, Audience, Rubric perception, Rubric application
This research is partially supported by Utah State University. Portions of this research were previously presented at the American Educational Research Association Annual Meeting (AERA 2016) in Washington, DC. We thank Drs. Anne Diekema and Andy Walker for their valuable input.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.