Assessment approaches in massive open online courses: Possibilities, challenges and future directions

  • Original Paper
  • Published in: International Review of Education

Abstract

The development of massive open online courses (MOOCs) has launched an era of large-scale interactive participation in education. While massive open enrolment and advances in learning technology are creating exciting potential for lifelong learning in formal and informal ways, the implementation of efficient and effective assessment is still problematic. To ensure that genuine learning occurs, both assessments for learning (formative assessments), which evaluate students’ current progress, and assessments of learning (summative assessments), which record students’ cumulative progress, are needed. Providers’ more recent shift towards granting certificates and digital badges for course accomplishments also indicates the need for proper, secure and accurate assessment results to ensure accountability. This article examines possible assessment approaches that fit open online education from formative and summative assessment perspectives. The authors discuss the importance of, and challenges to, implementing assessments of MOOC learners’ progress for both purposes. Various formative and summative assessment approaches are then identified, and the authors examine and analyse their respective advantages and disadvantages. They conclude that peer assessment is quite possibly the only universally applicable approach in massive open online education. They discuss the promises, practical and technical challenges, current developments in and recommendations for implementing peer assessment, and suggest some possible future research directions.

Résumé (French abstract)

Assessment methods in massive open online courses: Possibilities, challenges and future directions – The rise of massive open online courses (in French, formations en ligne ouvertes à tous, FLOT) opens the way to an era of mass interactive participation in education. While free, massive enrolment and advances in learning technologies create promising possibilities for both formal and informal lifelong learning, delivering efficient and effective assessment remains an obstacle. To guarantee genuine learning, it is necessary to carry out both assessments for learning (formative assessments), which measure learners’ current progress, and assessments of learning (summative assessments), which record learners’ cumulative progress. Providers’ recent tendency to award certificates and digital badges for successful course completion also signals the need for appropriate, secure and accurate assessment results that guarantee accountability. The article examines possible assessment approaches suited to open online education from the angle of formative and summative assessment. The authors point out the importance of, and the challenges involved in, assessing the progress of MOOC learners for these two purposes. They identify several formative and summative assessment approaches, examining and analysing their respective advantages and disadvantages. They conclude that peer assessment is most probably the only universally applicable approach in open online education. They present its promise, its practical and technical challenges, current developments in implementing this type of assessment, and recommendations. Finally, they propose several possible directions for future studies.

摘要 (Chinese abstract)

Assessment methods for MOOCs: Opportunities, challenges and future directions – MOOCs have opened a new era of large-scale interactive learning. Advances in educational technology have created many opportunities for lifelong learning, but at the same time, achieving efficient assessment of learning has become a major challenge. To support student learning, both formative assessment (which gives students staged feedback) and summative assessment (which evaluates the final outcomes of instruction) are necessary. Many MOOCs have begun to issue electronic certificates to those who complete their courses, a trend that makes secure and valid assessment especially important. This article explains the importance of assessment in MOOCs and the challenges it faces, introduces formative and summative assessment methods applicable to MOOCs, and analyses the strengths and weaknesses of the different methods. The authors hold that peer assessment is a universally applicable assessment method in MOOCs. The article also discusses the advantages of, challenges to, development trends in, and practical issues with peer assessment, and finally proposes future directions for MOOC assessment methods.

Notes

  1. “The open educational resource (OER) movement has been growing rapidly since 2001, stimulated by funding from benefactors such as the Hewlett Foundation and UNESCO, and providing educational content freely to institutions and learners across the world” (Sclater 2009, p. 485).

  2. Signature Track is an “option that … give[s] students in select classes the opportunity to earn a Verified Certificate for completing their Coursera course” (Coursera 2013). It offers “Identity Verification …, Verified Certificates … and Shareable Course Records …, electronic course records [which students can share] with employers, educational institutions, or anyone else through a unique, secure URL” (ibid.).

  3. One example of this is a low-cost Online Master of Science in Computer Science (OMS CS) offered by the Georgia Institute of Technology. See https://www.omscs.gatech.edu/ [accessed 26 January 2018].

  4. The term “flipped” (often used in the expression “flipped classroom”) refers to an instructional strategy that inverts the traditional learning environment by delivering instructional content outside of the classroom (usually online) and bringing more activities into the classroom. See https://en.wikipedia.org/wiki/Flipped_classroom [accessed 5 February 2018].

  5. A “lurker” is someone who watches from the sidelines without making his/her presence known.

  6. For recent figures, see https://www.class-central.com/report/mooc-stats-2017/ [accessed 5 February 2018].

  7. VUE stands for Virtual University Enterprises.

  8. For more details on what automated essay scoring (AES) can and cannot do, see Shermis et al. (2010).

  9. The term “ill-structured” does not mean that the question has been badly designed, but that it is one which allows for several correct answers and/or ways of arriving at a correct solution. It is this complexity which makes it unsuitable for automated assessment. By contrast, a well-structured question can only lead to one correct answer and is therefore machine-gradable (see the first sketch following these notes).

  10. Ground truth assessment results are the results considered the most reliable and accurate, and thus serve as the standard against which other assessment results are compared.

  11. In the context of data, noisiness refers to data that contain various types of error, while sparsity refers to the fact that only limited information is available, for example when each submission receives only a few peer ratings (see the second sketch following these notes).

  12. Social loafing is a term used in social psychology; it refers to the phenomenon of a person exerting less effort to achieve a goal when he/she works in a group than when working alone.

  13. Computerised-adaptive assessment (often also referred to as computerised-adaptive testing, CAT) is a computer-based form of testing that adapts to the examinee’s ability level. In CAT, an examinee’s next question depends on his/her responses to the previous questions (see the third sketch following these notes).

  14. Authentic assessment is assessment that mirrors real-life problems, usually focusing on contextualised tasks.

References

  • Admiraal, W., Huisman, B., & Van de Ven, M. (2014). Self- and peer assessment in massive open online courses. International Journal of Higher Education, 3(3), 119–128.

  • Balfour, S. P. (2013). Assessing writing in MOOCs: Automated essay scoring and Calibrated Peer Review™. Journal of Research and Practice in Assessment, 8(1), 40–48.

  • Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.

  • Bowen, W. G. (2013). Higher education in the digital age. Princeton, NJ: Princeton University Press.

  • Breslow, L., Pritchard, D. E., Deboer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013). Studying learning in the worldwide classroom: Research into edX’s first MOOC. Research and Practice in Assessment, 8(1), 13–25.

  • Brindley, J., Blaschke, L. M., & Walti, C. (2009). Creating effective collaborative learning groups in an online environment. The International Review of Research in Open and Distributed Learning, 10(3). Retrieved 18 January 2018 from http://www.irrodl.org/index.php/irrodl/article/view/675/1271.

  • Bulu, S. T., & Yildirim, Z. (2008). Communication behaviors and trust in collaborative online teams. Educational Technology and Society, 11(1), 132–147.

  • Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121–152.

  • Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers and Education, 48(3), 409–426.

  • Chung, C. (2015). Futurelearn partners with Pearson VUE for proctored testing. Class Central (web post, 8 May). Retrieved 18 January 2018 from https://www.class-central.com/report/futurelearn-pearson-vue-proctored-testing/.

  • Coetzee, D., Fox, A., Hearst, M. A., & Hartmann, B. (2014). Should your MOOC forum use a reputation system? In Proceedings of the 17th ACM conference on computer-supported cooperative work and social computing (pp. 1176–1187). New York: Association for Computing Machinery (ACM) Press.

  • Coursera. (2013). Introducing signature track. Coursera (blog post 9 January). Retrieved 26 January 2018 from https://blog.coursera.org/signaturetrack/.

  • Daradoumis, T., Bassi, R., Xhafa, F., & Caballé, S. (2013). A review on massive e-learning (MOOC) design, delivery and assessment. In Proceedings of the eighth international conference on P2P, parallel, grid, cloud and internet computing (pp. 208–213). Compiègne: IEEE.

  • Dascalu, M.-I., Bodea, C.-N., Mihailescu, M. N., Tanase, E. A., & de Pablos, P. O. (2016). Educational recommender systems and their application in lifelong learning. Behaviour and Information Technology, 35(4), 290–297.

  • Dinevski, D., & Kokol, P. (2004). ICT and lifelong learning. European Journal of Open, Distance and E-Learning, 7(2), Article 136. Retrieved 18 January 2018 from http://www.eurodl.org/?p=archives&year=2004&halfyear=2&article=136.

  • Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322.

  • Foster, D., & Layman, H. (2013). Online proctoring systems compared. Retrieved 5 February 2018 from https://ivetriedthat.com/wp-content/uploads/2014/07/Caveon-Test-Security.pdf.

  • Garavalia, L., Olson, E., Russel, E., & Christensen, L. (2007). How do students cheat? In E. M. Anderman & T. B. Murdock (Eds.), Psychology of academic cheating (pp. 33–55). Burlington, MA: Elsevier Academic.

  • Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19(3), 133–148.

  • Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relationships among teaching, cognitive and social presence: Student perceptions of the community of inquiry framework. The Internet and Higher Education, 13(1), 31–36.

  • Gershon, R. C. (2005). Computer adaptive testing. Journal of Applied Measurement, 6(1), 109–127.

  • Gielen, S., Dochy, F., Onghena, P., Struyven, K., & Smeets, S. (2011). Goals of peer assessment and their associated quality concepts. Studies in Higher Education, 36(6), 719–735.

  • Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers and Education, 57(4), 2333–2351.

  • Goldin, I. M. (2012). Accounting for peer reviewer bias with Bayesian models. In J. Kim & R. Kumar (Eds.), Proceedings of the full-day workshop on intelligent support for learning groups at the 11th international conference on intelligent tutoring systems (ITS 2012) (pp. 27–34). Chania: (n. p.). Retrieved 31 January 2018 from https://sites.google.com/site/islg2012/.

  • Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the first ACM conference on learning at scale (pp. 41–50). New York: Association for Computing Machinery (ACM) Press.

  • Harlen, W., & James, M. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education, 4(3), 365–379.

  • Hollands, F. M., & Tirthali, D. (2014). MOOCs: Expectations and reality. Full report. New York: Center for Benefit-Cost Studies of Education (CBCSE), Teachers College, Columbia University. Retrieved 18 January 2018 from http://cbcse.org/wordpress/wp-content/uploads/2014/05/MOOCs_Expectations_and_Reality.pdf.

  • Hwang, G. J., Hung, C. M., & Chen, N. S. (2014). Improving learning achievements, motivations and problem-solving skills through a peer assessment-based game development approach. Educational Technology Research and Development, 62(2), 129–145.

  • Jordan, K. (2014). Initial trends in enrolment and completion of massive open online courses. The International Review of Research in Open and Distributed Learning, 15(1), 133–160. Retrieved 18 January 2018 from http://www.irrodl.org/index.php/irrodl/article/view/1651.

  • Jordan, K. (2015). Massive open online course completion rates revisited: Assessment, length and attrition. The International Review of Research in Open and Distributed Learning, 16(3), 341–358. Retrieved 18 January 2018 from http://www.irrodl.org/index.php/irrodl/article/view/2112/3340.

  • Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65(4), 681–706.

  • Khalil, H., & Ebner, M. (2014). MOOCs completion rates and possible methods to improve retention: A literature review. In Proceedings of EdMedia: World conference on educational media and technology, 23 June, Tampere, Finland (pp. 1305–1313). Waynesville, NC: Association for the Advancement of Computing in Education (AACE).

  • Kim, M. (2005). The effects of the assessor and assessee’s roles on preservice teachers’ metacognitive awareness, performance, and attitude in a technology-related design task. DPhil Dissertation. Electronic Theses, Treatises and Dissertations, Florida State University. Retrieved 18 January 2018 from http://purl.flvc.org/fsu/fd/FSU_migr_etd-3051.

  • Kizilcec, R. F., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: Analyzing learner subpopulations in massive open online courses categories and subject descriptors. In Proceedings of the third international conference on learning analytics and knowledge (pp. 170–179). New York: Association for Computing Machinery (ACM) Press.

  • Koller, D., Ng, A., Do, C., & Chen, Z. (2013). Retention and intention in massive open online courses. Educause Review, 48(3), 62–63. Retrieved 18 January 2018 from http://www.educause.edu/ero/article/retention-and-intention-massive-open-online-courses.

  • Kulkarni, C., Wei, K. P., Le, H., Chia, D., Papadopoulos, K., Cheng, J., et al. (2015). Peer and self-assessment in massive online classes. In H. Plattner, C. Meinel, & L. Leifer (Eds.), Design thinking research: Building innovators (pp. 131–168). Cham: Springer.

  • Lewin, T. (2012). Colorado State to offer credits for online class. The New York Times, 6 September. Retrieved 18 January 2018 from http://www.nytimes.com/2012/09/07/education/colorado-state-to-offer-credits-for-online-class.html?_r=0.

  • Luo, H., Robinson, A. C., & Park, J.-Y. (2014). Peer grading in a MOOC: Reliability, validity, and perceived effects. Journal of Asynchronous Learning Networks, 18(2). Retrieved 18 January 2018 from http://onlinelearningconsortium.org/sites/default/files/429-2286-1-LE.pdf.

  • Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In L. Dirckinck-Holmfeld, V. Hodgson, C. Jones, M. de Laat, D. McConnell & T. Ryberg (Eds.), Proceedings of the seventh international conference on networked learning (pp. 266–275). Lancaster: University of Lancaster.

  • Mak, B., & Coniam, D. (2008). Using wikis to enhance and develop writing skills among secondary school students in Hong Kong. System, 36(3), 437–455.

  • Mukta, G. (2015). Experimenting with open online office hours. edX Blog (blog post 21 July). Retrieved 18 January 2018 from http://blog.edx.org/experimenting-with-open-online-office-hours.

  • Newton, D. (2015). Cheating in online classes is now big business. The Atlantic, 4 November. Retrieved 18 January 2018 from http://www.theatlantic.com/education/archive/2015/11/cheating-through-online-courses/413770/.

  • Onah, D. F. O., Sinclair, J., & Boyatt, R. (2014). Dropout rates of massive open online courses: Behavioral patterns. In L. Gómez Chova, A. López Martínez & I. Candel Torres (Eds.), Proceedings of the 6th international conference on education and new learning technologies (EDULEARN 14) (pp. 5825–5834). Valencia: International Academy of Technology, Education and Development (IATED). Retrieved 31 January 2018 from http://wrap.warwick.ac.uk/65543/1/WRAP_9770711-cs-070115-edulearn2014.pdf.

  • Pappano, L. (2012). The year of the MOOC. The New York Times, 2 November. Retrieved 18 January 2018 from http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html?pagewanted=all.

  • Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., & Koller, D. (2013). Tuned models of peer assessment in MOOCs. In S. K. D’Mello, R. A. Calvo & A. Olney (Eds.), Proceedings of the 6th international conference on educational data mining (EDM 2013) (pp. 153–160). Worcester, MA: International Educational Data Mining Society. Retrieved 31 January 2018 from http://www.educationaldatamining.org/EDM2013/proceedings/EDM2013Proceedings.pdf.

  • Raman, K., & Joachims, T. (2014). Methods for ordinal peer grading. In Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1037–1046). New York: Association for Computing Machinery (ACM) Press.

  • Ramesh, A., Goldwasser, D., Huang, B., Daumé III, H., & Getoor, L. (2014). Understanding MOOC discussion forums using seeded LDA. In Proceedings of the ninth workshop on innovative use of NLP for building educational applications (pp. 28–33). Baltimore, MD: Association for Computational Linguistics.

  • Ricci, F., Rokach, L., & Shapira, B. (2011). Introduction to recommender systems handbook. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender systems handbook (pp. 1–35). Boston: Springer.

  • Robinson, A. C., Kerski, J., Long, E. C., Luo, H., DiBiase, D., & Lee, A. (2015). Maps and the geospatial revolution: Teaching a massive open online course (MOOC) in geography. Journal of Geography in Higher Education, 39(1), 65–82.

  • Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27.

  • Roediger, H. L., Putnam, A. L., & Smith, M. A. (2011). Ten benefits of testing and their applications to educational practice. In J. Mestre & B. Ross (Eds.), Psychology of learning and motivation (pp. 1–36). Oxford: Elsevier.

  • Sandeen, C. (2013a). Assessment’s place in the new MOOC world. Journal of Research and Practice in Assessment, 8(1), 5–12.

  • Sandeen, C. (2013b). Integrating MOOCS into traditional higher education: The emerging “MOOC 3.0” era. Change: The Magazine of Higher Learning, 45(6), 34–39.

  • Sclater, N. (2009). The organizational impact of open educational resources. In U. D. Ehlers & D. Schneckenberg (Eds.), Changing cultures in higher education (pp. 485–497). Berlin: Springer.

  • Sharples, M. (2000). The design of personal mobile technologies for lifelong learning. Computers and Education, 34(3–4), 177–193.

  • Sharples, M., Kirsop, L., & Kholmatova, A. (2016). Designing small group discussion for a MOOC platform. In Proceedings of the third conference on learning with MOOCS (LWMOOCs’16): Being and learning in a digital age (pp. 11–12). Philadelphia: University of Pennsylvania. Presentation slides retrieved 31 January 2018 from https://www.slideshare.net/sharplem/small-group-learning-for-a-mooc-pplatform.

  • Shermis, M. D., Burstein, J., Higgins, D., & Zechner, K. (2010). Automated essay scoring: Writing assessment and instruction. In E. Baker, B. McGaw & N. S. Petersen (Eds.), International encyclopedia of education (3rd ed., pp. 75–80). Oxford: Elsevier. Retrieved 30 January 2018 from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.652.7014&rep=rep1&type=pdf.

  • Shohamy, E., Donitsa-Schmidt, S., & Ferman, I. (1996). Test impact revisited: Washback effect over time. Language Testing, 13(3), 298–317.

  • Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1), Article no. 1. Retrieved 18 January 2018 from http://er.dut.ac.za/handle/123456789/69.

  • Siemens, G. (2013). Massive open online courses: Innovation in education? In R. McGreal, W. Kinuthia & S. Marshall (Eds.), Open educational resources: Innovation, research and practice (pp. 5–15). Vancouver: Commonwealth of Learning and Athabasca University.

  • Skrypnyk, O., Joksimović, S., Kovanović, V., Gašević, D., & Dawson, S. (2015). Roles of course facilitators, learners, and technology in the flow of information of a cMOOC. The International Review of Research in Open and Distributed Learning, 16(3), 188–217. Retrieved 18 January 2018 from http://www.irrodl.org/index.php/irrodl/article/view/2170/3347.

  • Sluijsmans, D. M. A., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002a). Peer assessment training in teacher education: Effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27(5), 443–454.

  • Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Bastiaens, T. J. (2002b). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29(1), 23–42.

  • Smith, P. (2014). The coming era of personalized learning paths. Educause Review, 49(6). Retrieved 18 January 2018 from http://er.educause.edu/articles/2014/11/the-coming-era-of-personalized-learning-paths.

  • Soares, L. (2011). The “personalization” of higher education: Using technology to enhance the college experience. Center for American Progress (web post, 4 October). Retrieved 18 January 2018 from https://www.americanprogress.org/issues/labor/report/2011/10/04/10484/the-personalization-of-higher-education/.

  • Springer, L., Stanne, M. E., & Donovan, S. S. (1999). Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: A meta-analysis. Review of Educational Research, 69(1), 21–51.

  • Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press.

  • Staubitz, T., Petrick, D., Bauer, M., Renz, J., & Meinel, C. (2016). Improving the peer assessment experience on MOOC platforms. In Proceedings of the third ACM conference on learning at scale (pp. 389–398). Edinburgh: Association for Computing Machinery (ACM) Press.

  • Suen, H. K. (2013). Role and current methods of peer assessment in massive open online courses (MOOCs). Paper presented at the first international workshop on advanced learning sciences (IWALS), 21–22 October, University Park, PA.

  • Suen, H. K. (2014). Peer assessment for massive open online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3), 312–327. Retrieved 18 January 2018 from http://www.irrodl.org/index.php/irrodl/article/view/1680/2904.

  • Tomkin, J. H., & Charlevoix, D. (2014). Do professors matter? Using an a/b test to evaluate the impact of instructor involvement on MOOC student outcomes. In Proceedings of the first ACM conference on learning at scale (pp. 71–78). New York: Association for Computing Machinery (ACM) Press.

  • Topping, K. J. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276.

  • Tsai, C.-C., Liu, E. Z.-F., Lin, S. S. J., & Yuan, S.-M. (2001). A networked peer assessment system based on a Vee heuristic. Innovations in Education and Teaching International, 38(3), 220–230.

  • Tu, C.-H., & McIsaac, M. (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, 16(3), 131–150.

  • Uto, M., & Ueno, M. (2016). Item response theory for peer assessment. IEEE Transactions on Learning Technologies, 9(2), 157–170. Retrieved 30 January 2018 from https://www.computer.org/csdl/trans/lt/2016/02/07243342.pdf.

  • van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2009). Peer assessment for learning from a social perspective: The influence of interpersonal variables and structural features. Educational Research Review, 4(1), 41–54.

  • van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2010). Peer assessment as a collaborative learning activity: The role of interpersonal variables and conceptions. Learning and Instruction, 20(4), 280–290.

  • Veletsianos, G., & Shepherdson, P. (2015). Who studies MOOCs? Interdisciplinarity in MOOC research and its changes over time. The International Review of Research in Open and Distributed Learning, 16(3). Retrieved 18 January 2018 from http://www.irrodl.org/index.php/irrodl/article/view/2202/3348.

  • Walvoord, M. E., Hoefnagels, M. H., Gaffin, D. D., Chumchal, M. M., & Long, D. A. (2008). An analysis of Calibrated Peer Review (CPR) in a science lecture classroom. Journal of College Science Teaching, 37(4), 66–73.

  • Webb, N. M., Troper, J. D., & Fall, R. (1995). Constructive activity and learning in collaborative small groups. Journal of Educational Psychology, 87(3), 406–423.

  • Xiong, Y., Goins, D., Suen, H. K., Pun, W. H., & Zang, X. (2014). A proposed credibility index (CI) in peer assessment. Paper presented at the 76th annual meeting of the National Council on Measurement in Education (NCME), 2–6 April, Philadelphia, PA.

  • Xiong, Y., Li, H., Kornhaber, M. L., Suen, H. K., Pursel, B., & Goins, D. D. (2015). Examining the relations among student motivation, engagement, and retention in a MOOC: A structural equation modeling approach. Global Education Review, 2(3), 23–33.

  • Yuan, L., & Powell, S. (2013). MOOCs and open education: Implications for higher education. Bolton: Centre for Educational Technology and Interoperability Standards (CETIS). Retrieved 18 January 2018 from http://publications.cetis.org.uk/wp-content/uploads/2013/03/MOOCs-and-Open-Education.pdf.

Download references

Author information

Corresponding author

Correspondence to Yao Xiong.

About this article

Cite this article

Xiong, Y., Suen, H.K. Assessment approaches in massive open online courses: Possibilities, challenges and future directions. Int Rev Educ 64, 241–263 (2018). https://doi.org/10.1007/s11159-018-9710-5
