MOOCs and the motivation problem

A massive open online course (MOOC) is an online educational environment in which a virtually unlimited number of learners can learn collaboratively without being physically present in the same place. The idea of MOOCs is based on three main components: peer-to-peer interactivity; openly accessible learning material; and technological equipment providing the necessary conditions for learners to process and share learning material.

Most MOOCs are grounded in a specific pedagogical model called connectivism. The term was coined by George Siemens (2005) and Stephen Downes (2005a). In a nutshell, it refers to the connectedness of learners through electronic networks, enabling them to engage in collaborative learning. Siemens summarises his eight principles of connectivism as follows:

  1. Learning and knowledge rests in diversity of opinions.

  2. Learning is a process of connecting specialized nodes or information sources.

  3. Learning may reside in non-human appliances.

  4. Capacity to know more is more critical than what is currently known.

  5. Nurturing and maintaining connections is needed to facilitate continual learning.

  6. Ability to see connections between fields, ideas, and concepts is a core skill.

  7. Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.

  8. Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.

(Siemens 2005; numbering added)

By following these principles, connectivist pedagogy aims to realise the most democratic form of education in the service of online, self-taught, lifelong learning. (While there may be some tensions between some of these principles, a proper balance is presumably achievable.)

In a blogpost a few years later, Downes (2010) also highlights four democratic principles of connectivism: autonomy, diversity, openness and interactivity. These principles create the best conditions for fruitful peer-to-peer connections, and for obvious reasons, connectivity is at the core of connectivist learning.

While the desirability of these principles is hardly questionable, putting them into practice seems to be problematic. It can be expected that an educational environment based on these principles attracts learners. This is, however, only half-true. As Rita Kop (2011) and Katy Jordan (2015) indicate, enrolment is often extremely high in MOOCs – but so is the dropout rate: non-completion exceeds 99 per cent in some cases, with a median value of 87.4 per cent (Jordan 2015). Most registered (but not-for-credit) learners are just “lurking”: merely watching and listening from the sidelines without contributing anything, and thus taking only a passive part in the collaboration. The vast majority abandon the course after the first few weeks, and even the active learners’ activity decreases significantly over time. It seems that connectivist principles attract enrolment but do not attract completion.

Hence, motivation is a central problem for MOOCs: an extreme proportion of registered learners usually do not finish MOOC-style courses. One solution is to introduce motivation techniques into MOOCs. An increasingly popular way to do so is to extend MOOCs with multimedia resources and summative (evaluative end-of-course) assessment:Footnote 1 multimedia resources are intended to sustain interest during the course, and final exams to sustain interest in finishing it. This is what Siemens (2013) terms extended MOOCs (xMOOCs), distinct from the original MOOCs based on interactivity as a central principle of connectivist pedagogy (cMOOCs). A third related category is quasi-MOOCs, or qMOOCs: asynchronous educational environments that provide only access to educational resources and are hence technically not courses but collections of learning material.

In the case of xMOOCs, the interactive, connectivist environment is extended primarily with video lectures and summative assessment. Their pedagogical framework emerged from motivational problems with early MOOCs. In effect, however, these extensions are a step backwards to e-learning 1.0 techniques.Footnote 2 A distinction between two types of knowledge networks (originating from Weller 2007) is helpful in understanding this problem:

[i]n e-learning, two major traditions have been prevalent: one where connections are made with people and the other where they are made with resources (Kop 2011, p. 19).

Establishing access to resources was a typical aim of e-learning 1.0. In e-learning 2.0, learner contribution and learner–learner interactivity came to the fore (Downes 2005b; Karrer 2006).

Due to technological limitations, video lectures are even less collaborative and interactive than traditional lectures: instant verbal and nonverbal formative feedback from learners to the lecturer is practically impossible. This “uploading documents”-style education reduces the educational value of online education, similarly to qMOOCs, to that of an online (video) library service: though uploading documents provides access to learning material, it does not support and monitor understanding through proper feedback. Whether documents contain multimedia elements (as in xMOOCs) or not (as in qMOOCs) is irrelevant for human–human interactivity. Yet it is precisely that interactivity which is essential for MOOC-like educational environments building on the advantages of a massive number of participants. Furthermore, a lack of interactivity also disregards the community-building aspects of education and involves learners much less actively in the educational process.

Summative assessment, the other extension of xMOOCs, can ideally motivate learners to finish courses. But research has shown that completion rates are extremely low even in MOOCs providing certificates. Moreover, those learners who do persevere and obtain their certificates tend to be located in economically advanced countries (Reich and Ruipérez-Valiente 2018), the reason being that certificates are not included in the free course package. It seems that summative assessment alone is not a sufficient incentive for increasing completion rates. Hence, xMOOCs are not particularly successful as an attempt to solve the motivational problem of cMOOCs, and they are also not compatible with the original idea behind cMOOCs.

The central task of this article is to find a more viable alternative. In the course of my argument, I suggest that formative (rather than summative) assessment, supported by ideas borrowed from massively multiplayer online role-playing games (MMORPGs), may be a motivating educational tool that could make cMOOC completion rates more acceptable.

In a nutshell, users who engage (in some cases for a subscription fee) in MMORPGs connect and play on a global scale with a vast number of fellow players in a themed virtual world (e.g. fantasy or science fiction). They assume a character role and then participate in social interaction with their fellow players, complete tasks, trade commodities etc. One of the main incentives for players is having control over the development of their character, which becomes measurable as they gather points in the game’s inbuilt character progression system. Some games also facilitate the inclusion of player-generated content. Even though the purpose of the game is being actively involved in building a virtual world, the characteristics of the progression system are similar to those of ongoing, low-stakes (formative) assessment for monitoring purposes in a learning environment, and player-generated content is the outcome of collaboration. Thus, MMORPGs and cMOOCs have some things in common, while at the same time, the former also offer some extensions to the latter. These (dis)similarities are discussed in more detail later in this article. Prior to that, I establish a general framework to explore their association in terms of motivation.

In order to make use of some elements of MMORPGs as a possible solution for the motivation problem with cMOOCs, five premises must be accepted:

  1. formative assessment is motivational;

  2. MMORPGs offer a suitable form of formative assessment in a sense relevant for motivation;

  3. MMORPG ideas can be suitably applied to cMOOC-like educational environments;

  4. formative assessment is in accordance with connectivist principles guiding educational activities in cMOOCs; and

  5. possible problems that arise with formative assessment in cMOOCs can be solved so that the solution for the problem(s) does not generate other problems.

These premises will be discussed in the following sections in that order.

Motivation and formative assessment

Learner assessment is a central topic in the discussion about the educational perspectives MOOCs can offer (Mackness 2008; 2010; Peirano 2010; Mak 2011; Baxi 2012; Admiraal et al. 2014; Kulkarni et al. 2015). However, the claim that assessment is an effective means for motivating learners is controversial. Ken Masters argues that

[i]n a [c]MOOC, assessment does not drive learning; learners’ own goals drive learning (Masters 2011, p. 3; emphases added).

The main reason is that cMOOCs, as mentioned earlier, follow the most democratic form of education, based on the principles of autonomy, diversity, openness and interactivity (Downes 2010). For MOOCs to remain committed to these principles, learner autonomy must drive learning, fuelled by intrinsic rather than extrinsic motivation.Footnote 3 However, if course participants’ motivation to enrol in a course is the pursuit of their own goals, then dropping out before completion must mean that their learning experience at the beginning of the course did not turn out to be sufficiently useful for reaching their personal goals.

The idea of learner assessment seems to be in conflict with the four principles highlighted by Downes (2010). In order to increase autonomy, grounds for measuring learners’ knowledge must be decreased significantly. Diversity implies that to some (though certainly not all) questions, there are no automatic right-or-wrong answers, but that different perspectives can imply different “right” answers to the same question. Openness requires that MOOCs are accessible to everyone, without any entry requirements, and for the vast majority of learners, obtaining a final grade at the end of the course is not a purpose. Finally, interactivity makes it hard to identify what individual learners contribute to the learning outcome produced by the learning community. Assessing learning outcomes under these circumstances is difficult, if possible at all.

However, when assessment is discussed in the context of MOOCs, it typically means summative assessment, especially in the form of final exams or tests (most explicitly considered in Mak 2012). Though recent tendencies have blurred the picture (Xiong and Suen 2018 discuss both kinds of assessment to some extent), much less has been said about formative assessment in the context of MOOCs and its relation to motivation. Since summative and formative assessment serve different purposes, their application in MOOCs can also be expected to have different effects. It therefore seems reasonable to complement the discussion of summative assessment with a discussion of formative assessment in MOOCs.

Formative, i.e. ongoing, assessment relates to motivation in three ways at least. First, it consists in providing feedback to learners during their learning process in order to support their development. The aim of feedback is to clarify weaknesses and possible ways of further improvement. Feedback helps learners to keep progressing, and progress is a motivating power. Setting up goals and then realising their achievement is one of the key factors in motivation (Malone and Lepper 1987).

Second, formative assessment helps the instructor revise educational purposes with respect to learner needs and progress. Via formative assessment, the instructor can focus on particular topics or methods within the course material in accordance with the needs, interests and/or strengths/weaknesses of learners. Changing the focus in accordance with learners' interests (i.e. matching the goals they have set out to achieve) is also an effective way of motivating them to continue their learning process.

Third, and most importantly for the present purposes, formative assessment is fully compatible in its characteristics with motivation by gamification.Footnote 4 Constant progress monitoring, gradually higher scores and milestones are gamified elements that can serve as key factors in motivating learners (Dichev and Dicheva 2017; Szabó and Szemere 2016, 2017). This, along with the other two aspects, implies that introducing formative assessment into cMOOCs would support learner motivation, especially in a gamified form.

MMORPGs and motivation

Gamification is a relatively new trend that aims to motivate learners in (typically but not necessarily) online educational environments. Including game elements in education is a possible way of motivating learners extrinsically until they develop an intrinsic commitment to learning (Szabó and Szemere 2016, 2017). Due to their massively multi-participant character, MMORPGs can serve as a blueprint for a particular form of gamification in education that has the potential of being compatible with cMOOCs. MMORPGs, like games in general, are motivational mainly due to their visual elements, storyline, interactivity, immediate feedback, experience points and levelling-up system (Caponetto et al. 2014; Nah et al. 2014). While visual elements and storyline can presumably make only external additions to education, interactivity is essential for cMOOCs, and immediate feedback, experience points and a levelling-up system can be included in formative assessment, providing an internal addition to the environment.

MMORPGs, just like MOOCs, are based on the idea that the quality of interaction can be dramatically increased by increasing the number of participants. Offline role-playing has long been applied in education, traditionally associated with Jacob L. Moreno’s psychodrama and sociodrama (Moreno 1946). In Moreno’s version, patients were asked to play improvisatory roles in a fictitious situation in order to learn to handle psychic and social problems (Blatner 2009). Theories of learning via video gaming have also become popular recently (Papert 1998a, b; Prensky 2001; Gee 2003; Aldrich 2009), including the idea of gamification in education.

In MMORPGs, the massive number of participants makes room for diversity, a central principle for connectivism. Participants are represented by “characters” or “avatars” they generate. Due to the role-playing idea, characteristics of these characters do not necessarily represent the characteristics of their owners, but they represent predefined roles, owner decisions and the owners’ progress in engaging with other participants as well as their skills development measured by their completion of quests.

Due to the rule-governed nature of human–human interactivities within the game, MMORPGs naturally create a platform for organised, pseudonymous discussions, and hence they are extremely useful for educational activities based on the principle of collaboration, even in the absence of instructors (Steinkuehler 2004, 2006). Though the complete absence of instructors is not necessarily a goal in running successful cMOOCs, making the instructor’s task more manageable is essential in educational environments with an extremely low instructor–learner ratio.

MMORPGs as a framework for discussion seminars

A cMOOC educational environment can be best described as a virtual discussion seminar in which learners bring in their own background knowledge and any resources they find about the course topic in order to share and discuss them with their peers. This section aims to show briefly how ideas from the world of MMORPGs can contribute to such an environment by increasing motivational potential.

A full characterisation of MMORPGs and their relevance for education is beyond the scope of this article. Some of their features that are relevant for motivation, like storytelling, visual elements or avatars, can also be distracting in education. Hence, exploiting these characteristics could be dangerous. How serious this danger is also lies beyond the scope of this article. The present task is rather to focus on those characteristics whose application is likely to offer advantages (and advantages only) in online education.

At least three such characteristics of MMORPGs are directly relevant for motivation, and they are completely absent from MOOCs. Progress in MMORPGs is measured mainly in these terms and at least two of them can be seen as formative assessment tools. These characteristics are as follows:

  1. character generation;

  2. experience point and levelling-up system; and

  3. interactive progress via user-defined goals.

(1) is motivational because of identity construction and trying out different roles; (2) is motivational because of constant progress monitoring and instant feedback; and (3) is motivational because of a high level of social interaction. These motivational factors are discussed below one by one.

(1) Character generation Playing an MMORPG begins with character generation. This is a process of allocating different attributes to the player’s character (see Patisup 2007 as an example); the process is analogous to filling in the registration form of a social media application. Social media profiles normally (though not necessarily) represent their owner. MMORPG characters do not represent their owners but the role the owner intends to “play out”. Role-playing is highly motivational because role-players can try out different roles without too much investment or a long-term commitment to those roles. Identity construction is a key factor in MMORPGs’ popularity (Lee and Hoadley 2006).

Character generation determines not only the appearance of the avatar and the information provided about the user (as in social media profiles), but also the in-game skills and abilities of the character. This is done in accordance with user preference on the one hand, and, on the other, a random generator (imitating a dice roll) that sets up some measurable characteristics, like the character’s level of “dexterity” or “wisdom”.
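The interplay between chosen preferences and rolled attributes can be sketched in a few lines of code. This is a purely illustrative toy, not a description of any actual MMORPG or MOOC platform; the attribute names, topic labels and the 3d6-style roll are all hypothetical choices.

```python
import random

def roll_attribute(rolls=3, sides=6):
    """Imitate a tabletop-style dice roll (e.g. 3d6) for one starting attribute."""
    return sum(random.randint(1, sides) for _ in range(rolls))

def generate_character(name, chosen_topics):
    """Combine learner-chosen preferences with randomly rolled starting skills."""
    return {
        "name": name,
        "topics": list(chosen_topics),   # user preference: freely chosen
        "attributes": {                  # randomly generated, as in MMORPGs
            "interpretation": roll_attribute(),
            "argumentation": roll_attribute(),
            "systematisation": roll_attribute(),
        },
    }

learner = generate_character("Avatar42", ["connectivism", "assessment"])
# each rolled value falls between 3 (three ones) and 18 (three sixes)
print(learner["attributes"])
```

In a course setting, the rolled values could stand in for instructor-assigned starting positions, while the chosen topics express the learner's own interests.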

This structure thus allows an expression of preferences (e.g. learner’s interests or opinions relevant to the course) and an indication of the level of learner skills/experience regarding particular topics. The character is subject to constant development via activities within the game, just as learners are supposed to constantly develop in education. In this framework, the first task the learner is expected to carry out after entering the game (the course) is to set up a profile, committing her/himself to a set of characteristics according to predefined scales. Some characteristics can be chosen, others are predefined by the instructor, or are subject to the development of the character. In a discussion seminar, for example, learners are expected to take up and defend positions the instructor (or a random generator) assigns to them, and they can choose from possible topics and arguments that help them in defending their predefined position.

Pseudonymous profiles potentially increase diversity, exemption from prejudices (as in-game characteristics are clearly isolated from real-life characteristics), open-mindedness, and hence empathy towards others in different positions, situations and with different backgrounds. They also help learners understand strengths and weaknesses of theoretical positions they do not accept or consider seriously, but need to understand as they role-play virtual commitments, in order to progress further.

(2) Experience point and levelling-up system MMORPGs, like most games, monitor in-game progress by setting up challenges for the player; if s/he succeeds, “experience points” are received as a reward. Experience points are not only measurements of progress, but also a source of novel opportunities: after reaching certain predefined amounts of experience points, players can “level up”. By levelling up, they gain new in-game skills with which they can successfully tackle more interesting and difficult, higher-level challenges. Short-term goals defined as challenges or quests always motivate activity: since each goal is more easily achievable than the abstract final aim of all activities together, and short-term success results in easily gained rewards, motivation can be maintained by a series of short-term feedback on success.

The system of experience points and levelling up (as well as its supplements like progress bars, leaderboards, scoreboards, etc.)Footnote 5 gives instant and constant feedback, and could even serve as a means of grading. It is suitable for supplementing cMOOCs with a possible form of formative assessment and a motivational resource, thanks to constant progress monitoring and instant feedback. The levelling-up system could be adapted for recognising course completion (even without paid certificates), opening up in-game possibilities (e.g. higher-level learners could at some point become group leaders, teaching assistants, tutors, etc.). Levelling up can introduce a sort of hierarchy into the environment that is based on internal achievement. (The importance of this aspect will become apparent in a later section of this article, where hierarchical peer-to-peer assessment is introduced as a solution to the extremely low instructor–learner ratio in MOOCs.)
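As an illustration of how such a system could serve as a formative assessment tool, consider the following sketch. The point thresholds, task descriptions and unlocked roles are invented for the example and would need calibration in a real course.

```python
# Hypothetical XP thresholds for levels 1..5 and the roles levelling up unlocks.
LEVEL_THRESHOLDS = [0, 100, 250, 500, 1000]
UNLOCKED_ROLES = {3: "group leader", 4: "teaching assistant", 5: "tutor"}

class LearnerProgress:
    def __init__(self):
        self.xp = 0

    def award(self, points, reason=""):
        """Instant feedback: completing a short-term task yields points."""
        self.xp += points
        return f"+{points} XP ({reason}) -> level {self.level}"

    @property
    def level(self):
        """The current level is the number of thresholds already reached."""
        return sum(1 for t in LEVEL_THRESHOLDS if self.xp >= t)

    @property
    def role(self):
        """Levelling up can unlock in-course roles based on internal achievement."""
        return UNLOCKED_ROLES.get(self.level, "learner")

p = LearnerProgress()
p.award(120, "shared an annotated resource")
p.award(200, "completed a peer feedback round")
print(p.level, p.role)  # 3 group leader
```

The key design point is that each `award` call is a small, immediately rewarded short-term goal, while the role unlocked at a threshold is the internal-achievement hierarchy mentioned above.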

(3) Interactive progress via user-defined goals The main stage of MMORPGs offers potentially unlimited, open-ended directions for further progress. A high level of collaboration defines this phase, as quests often cannot be tackled by single characters with insufficient capabilities or one-sided specialisations. Hence, players are expected to form groups and attune their short-term purposes so that they can progress further. At this stage, MMORPGs can be understood as very complex multiple choice questions: a massive set of choices in collaborative challenges with multiple directions for further progress, thereby determining the course of the plot. Each choice to be made is not about the right answer but about one of the available paths suitable for further progress (though within particular quests, there can also be a place for right-or-wrong answers).

MMORPGs provide options from which participants can choose: quests and challenges that are set up by the instructor. In cMOOCs, learners are also supposed to set up their educational goals. However, due to this democratisation, learners in cMOOCs often lack guidance. If learners were guided by tasks and challenges (rather than by instructions on where to progress further), the democratic principles of connectivism would not be violated, and learners would have predefined options from which they could choose, so that they could see that the direction they intend to follow is a viable one.

In a virtual classroom discussion, character generation and development should consist mainly in developing competence in relevant interpretive, systematising and argumentative skills as well as an understanding of the rules of discussion; having an understanding of the main problems of the field; having a landscape view of different positions regarding the matter and the main arguments for/against them; developing the learner’s own views; and relating those views to other positions in the light of possible arguments and counter-arguments at different levels. All of these skills can be constantly developed by formative assessment.

However, assessment was originally missing from early cMOOCs. One likely reason for this is that connectivism seems to be incompatible with assessment: as mentioned earlier, it seems to be in conflict with the four democratic principles formulated by Downes (2010). Though these worries have been addressed above, the idea of formative assessment also seems to be in conflict with the eight connectivist principles of Siemens (2005). The next section intends to harmonise connectivism with the idea of formative assessment by distributing assessment tasks among peers.

Formative assessment and connectivism

Insofar as cMOOCs are grounded in connectivist pedagogical principles, even if assessment could be helpful in overcoming motivational problems, it cannot be applied if it is incompatible with connectivism. Yao Xiong and Hoi K. Suen (2018) argue that an important difference between cMOOCs and xMOOCs is that the latter can build on assessment whereas the former cannot. But as indicated above, xMOOCs do so at the price of giving up connectivism (and also of disregarding some important requirements of e-learning 2.0 in general). A more suitable solution would therefore be to find a way of balancing the two ideas and to provide an understanding of formative assessment that is compatible with connectivism.

There are (at least) four methods of assessment: teacher assessment, self-assessment, automatised assessment and peer assessment. Teacher assessment in MOOCs is impossible due to the low teacher–learner ratio: there is no teacher capacity to give formative feedback for everyone in a course with a massive number of participants.

Self-assessment is not suitable for connectivism, because the latter requires intersubjective interaction (sharing and discussing knowledge). This does not entirely exclude self-assessment from connectivist environments; it merely means that for a connectivist pedagogy, self-assessment must be supplemented with other forms of assessment.

Automatised assessment is applicable mainly to topics dominated by questions with predefined right-or-wrong answers, since these are the questions that can be checked automatically. But connectivism is generally more suitable for discussing complex topics with no pre-set right-or-wrong answers. Since connectivist learning is much more about exploring novel directions than memorising answers to questions, using automatised assessment effectively for measuring progress is at least challenging. The question here is not only whether it is possible at all, but also whether it is cost-effective.

For similar reasons, Xiong and Suen (2018) argue that xMOOCs need to feature peer assessment. Because it involves peer-to-peer interactivity, peer assessment is also the most compatible with connectivist ideas of the four assessment methods. Xiong and Suen argue that assessment is “difficult” in cMOOCs because generating and sharing knowledge, defined by Siemens (2005) as core activities in cMOOCs, are hard to assess. But an extended analysis of Siemens’s eight principles (ibid.), mentioned at the beginning of this article, can demonstrate that the incompatibility between connectivism and formative assessment is only superficial. Four of the eight tenets seem to be the most problematic: (1), (2), (4) and (8). They are discussed in some detail in the following paragraphs.

(1) Learning and knowledge rests in diversity of opinions implies that there is no universal set of right answers. While this may not apply to learning material about factual questions, connectivist pedagogy does imply that knowledge is produced rather than acquired by learners, and that learning is successful if it goes beyond the scope covered by predetermined learning material. For the reasons mentioned above, peer assessment seems to be the only viable form of assessment in MOOCs. But what are peers expected to assess if (at least in some cases) there are no right-or-wrong answers? Rather than the reproduction of learning material, they can assess learner–learner connections and the generation and sharing of knowledge content, since these are the most important educational aims in a connectivist learning environment. It is not a problem if assessment does not reflect progress in acquiring knowledge in terms of predefined expectations, because in this context the main purpose of assessment is motivation rather than measuring learning outcome.

(2) Learning is a process of connecting specialised nodes or information sources implies that making connections among knowledge items is essential for connectivist learning. Whether a connection established between two items is “good” (and in what sense) is measurable by applying standard social media techniques (making “friendships”, commenting and “(dis)liking” others’ profiles and posts, etc.), by which learners acknowledge the contribution of their peers to knowledge generation. Sharing and commenting on content can also provide instant feedback and support for further development. Even if assessment in this form is less suitable for measuring the quality of content generated and shared, it is suitable for measuring its relevance in terms of peer needs and interests.
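A minimal sketch of how such social-media-style reactions could be aggregated into a formative relevance signal follows. The weighting of likes, comments and shares is an arbitrary, hypothetical choice, as are the post names.

```python
def relevance_score(likes, comments, shares):
    """Weight richer interactions (comments, shares) above bare likes,
    on the assumption that they indicate deeper peer engagement."""
    return likes * 1 + comments * 3 + shares * 5

# Hypothetical posts shared by a learner in the course environment:
posts = {
    "summary_of_week2": relevance_score(likes=10, comments=4, shares=2),
    "offtopic_meme": relevance_score(likes=25, comments=0, shares=0),
}

# A much-discussed and re-shared post outscores a merely "liked" one:
print(posts)  # {'summary_of_week2': 32, 'offtopic_meme': 25}
```

Such a score measures relevance to peer needs and interests rather than content quality, which is exactly the limitation noted above.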

(4) Capacity to know more is more critical than what is currently known suggests that the knowledge already gained is of secondary importance compared to the knowledge of which one is capable. But future knowledge is unmeasurable in advance, and what learners are capable of knowing in the future is also hard (if at all possible) to measure. Once again, however, this is problematic only if assessment measures learning outcome in terms of knowledge content gained and reproduced, which is precisely what Siemens’s fourth principle accords secondary importance to. The capacity for future knowledge, by contrast, can be increased by collaboration, and collaboration is measurable, as argued above.

Finally, from (8) Decision-making is itself a learning process, it follows that the selection of learning material by the learner rather than the instructor is essential for connectivist learning. As a consequence, the learning material often consists of resources that are not selected by standards of relevance and quality. Even if the extent to which learners know this material could be measured well, such measures would say nothing about the extent of the knowledge they really gained, but only about what portion of the unselected material they processed. That is why summative assessment hardly indicates anything important in cMOOCs. What is measurable, however, is how that material was discovered, processed and shared, and on what grounds it was found relevant, interesting and reliable.

These factors are also relevant for (4): if learners receive support and feedback on how to find relevant, interesting and reliable resources, this makes them capable of generating future knowledge. By learning how to select resources, they presumably learn skills that are far more important for generating future knowledge than any particular piece of knowledge that can be learned in a course, given the enormous amount of easily accessible but unselected and often unreliable information on the Internet.

Hence, even if these four principles of connectivism make measuring learning outcome in connectivist learning environments like cMOOCs problematic, measurability of learning outcome in the traditional sense is not necessary for assessment. Given that connectivism is not a traditional educational theory either, the evaluation of its outcome may be grounded in connectivist rather than traditional measures. Insofar as an introduction of formative assessment to cMOOCs aims at increasing motivation rather than measuring learning outcome, measurability problems with learning outcomes are irrelevant.

In xMOOCs, as well as in traditional, “offline” education, learning outcome is mostly assessed by the depth of learners’ knowledge and understanding of predetermined content. In cMOOCs, learning outcome can instead be assessed by the diversity of shared knowledge material, that is, by evaluating the quality of peer-to-peer connections and the knowledge produced and shared via peer-to-peer activities. A connectivist reinterpretation of the role of formative assessment in the educational process does not exclude the possibility that formative assessment also serves as in-progress feedback. On the contrary: peer feedback can itself be evaluated in terms of peer-to-peer connections, and hence providing feedback is also measurable in connectivist terms.

Distributed assessment

I have argued above that despite its disadvantages, peer assessment is the most viable option in cMOOCs. However, this does not preclude supplementing peer assessment with computer-driven methods like multiple choice questions (MCQs). MCQs need not reduce assessment to measuring learners’ grasp of knowledge content. They can be developed into a dialogic form of argumentation, “encourag[ing] students to explore different possible interpretations of a key [philosophical] passage” (MacDonald Ross 2008). “Writing [or thinking] in dialogue [while you read] means that you can imagine a character who is likely to disagree with the position you have come to, and speculate as to the sort of objections they might raise” (ibid.). In practical terms, MCQs are

linked to an important but ambiguous passage in the text. […] the passage referred to remains [visible on the learner’s screen] all the time. [Underneath], there is a set of very short web pages, with interlinks. These pages are at three distinct levels. At the top level, there is a list of possible interpretations of the passage. The student clicks on one of these interpretations, and is then given a second-level page, with a number of reasons for and against that interpretation. Finally, clicking on one of these reasons will bring up a third-level page, in which [the instructor] comment[s] on the validity or otherwise of that reason (ibid.).

The purpose of developing these dialogically conceptualised MCQs would not be to measure learners’ knowledge in terms of right-or-wrong answers to questions with predetermined answers. Responses to the latter type of questions can be assessed easily in any case (even free online MCQ applications like the game-based learning platform Kahoot! are suitable for assessing them – see kahoot.com). The real challenge is evaluating responses to open-ended questions with no predetermined answers (e.g. essay-like tasks), and this is where dialogic MCQs come in. They present divergent lines of argument in which learners, in accordance with Siemens’s eighth principle, must make choices. Their choices determine the path they can follow on a complex “map” of views, interpretations and arguments, arriving at perhaps not fully individual but nonetheless very specific points of view on a certain matter. Computers are not tools of evaluation here but tools of navigation. Hence, no complex programming or advanced artificial intelligence is required. The task of computers is only to establish a connection between the learner and the (logical, associative or moral/practical) consequences of her/his choices from predetermined alternatives. Both the content and the connections are humanly generated; what the computer does is log learner movements and assign humanly generated evaluations of progress and further possible choices to each step.

In a complex system of MCQs, levels of knowledge and preferences are interwoven. Knowledge level is determined by the number of choices one has made on the map. The very same choices also demonstrate a learner’s preferences in questions where judgment or opinion matters. This framework is therefore primarily suitable for deepening knowledge by following through the consequences of one’s preferences. It does not exclude right-or-wrong answers (there can be wrong answers leading to “dead ends” on the map), even if it does not necessarily require them either. The direct aim of making choices is to establish learner-to-content connections by following different routes; the indirect aim is to create peer-to-peer connections with peers who make similar choices at different milestones of the pre-established route.
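The three-level structure quoted above and the logging of choices can be sketched in code. The following is a minimal illustration, not an implementation from the article; all class and content names are hypothetical. It shows the division of labour the text describes: humans author the passage, interpretations, reasons and instructor comments, while the computer merely navigates and logs the learner’s movements, with knowledge level determined by the number of choices made.

```python
# Minimal sketch of a three-level dialogic MCQ map: a passage links to
# interpretations (top level), each interpretation to reasons for/against
# (second level), each reason to an instructor comment (third level).
# The computer only navigates and logs; all content is humanly generated.
from dataclasses import dataclass, field

@dataclass
class Reason:
    text: str
    instructor_comment: str        # third-level page, written by the instructor
    dead_end: bool = False         # a "wrong" route, if the course uses them

@dataclass
class Interpretation:
    text: str
    reasons: list                  # second-level page: reasons for/against

@dataclass
class MCQMap:
    passage: str                   # remains visible on the learner's screen
    interpretations: list          # top-level page: possible interpretations

@dataclass
class Learner:
    name: str
    path: list = field(default_factory=list)   # logged learner movements

    def choose(self, label: str) -> None:
        self.path.append(label)

    def knowledge_level(self) -> int:
        # knowledge level is determined by the number of choices made
        return len(self.path)

# Hypothetical content for illustration
m = MCQMap(
    passage="An important but ambiguous passage...",
    interpretations=[
        Interpretation("Reading A", [
            Reason("It fits the author's earlier claims.",
                   "Plausible, but note the counterexample in section 2."),
        ]),
        Interpretation("Reading B", [
            Reason("It ignores the historical context.",
                   "This objection is decisive here.", dead_end=True),
        ]),
    ],
)

learner = Learner("anon_1337")   # pseudonymised, as in MMORPGs
learner.choose("Reading A")                              # top level
learner.choose("It fits the author's earlier claims.")   # second level
print(learner.knowledge_level())  # → 2
```

Because the map is just linked, humanly written pages, enlarging it (as advanced learners are later invited to do) means adding nodes and links, not programming.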

Even with computer assistance, developing a series of multi-track MCQs requires an extended human contribution. In order to ease overload, a distribution of human tasks is necessary, and a viable way of doing this is to further distribute assessment tasks to (advanced) learners. This is not, however, a theoretically ad hoc move: insofar as establishing content–content, learner–content and learner–learner connections is essential for connectivist learning, the enlargement of MCQ maps by advanced learners can be seen as an integral part of their learning progress, not merely an extra task done for the sake of environment development that would normally be part of the instructor’s job.

MCQ maps provide a framework for establishing the conditions under which learners move from entrance-level knowledge to the essential part of connectivist learning: collecting, sharing and discussing learning material in interest groups. Automatised assessment, along with peer feedback via comments and “(dis)likes”, constitutes a score system that indicates learners’ progress.

Through scorekeeping, the learner, the instructor and the community can monitor and assess learning progress at individual and communal levels. Different expectations are set for learners with different scores. In a virtual classroom discussion, beginners are expected to gain experience in understanding the basics of a discussion and the main problems of the field; competent learners are expected to defend their own views-in-progress; advanced learners are expected to be able to argue extensively for positions they disagree with; and expert learners (as well as instructors) are expected to evaluate the arguments of others so that they can clarify controversial situations and keep scorekeeping balanced in problematic cases.

Problematic cases may include receiving “(dis)likes” for reasons other than contribution to generating knowledge; for example, learners may be exposed to cyber-bullying by their peers. However, the hierarchy of assessment ensures that peers giving scores are themselves assessed by higher-level peers (whose score-giving carries more weight than theirs), and a factor in their assessment is how they give scores. Peers who regularly give incorrect scores will be marked down in the long term and thus forced to progress more slowly. Unless there is a general problem with fairness, minor problems can be eliminated on a case-by-case basis. If collective assessment standards fail, the instructor can intervene. This may add extra tasks to her/his workload, but the burden would still be lower than if s/he were expected to do all the assessment personally.
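The self-correcting mechanism just described can be illustrated with a small sketch. It is only one possible reading of the proposal, with hypothetical function names and parameters: a higher-level peer compares a peer’s given scores against reference scores, and regular deviations mark the scorer down, slowing her/his progress.

```python
# Sketch of hierarchical review of score-giving: higher-level peers check
# how a lower-level peer scores, and regular incorrect scoring is penalised.
def count_incorrect(given, reference, tolerance=1):
    """Count how often a peer's scores deviate from a higher-level
    reviewer's reference scores by more than `tolerance`."""
    return sum(1 for g, r in zip(given, reference) if abs(g - r) > tolerance)

def adjust_progress(score, incorrect_count, penalty=2):
    """Mark down regularly incorrect scorers, forcing slower progress."""
    return max(0, score - penalty * incorrect_count)

# A peer scored three contributions; the reviewer's reference differs
# sharply on the second one, so one incorrect score is counted.
incorrect = count_incorrect(given=[5, 1, 4], reference=[4, 5, 4])
print(adjust_progress(50, incorrect))  # → 48
```

On this reading, the penalty is gradual rather than punitive: a single disagreement with a reviewer costs little, while systematic misuse of scoring (e.g. bullying via dislikes) accumulates and visibly slows progress.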

In order to have the best evaluation and feedback, a public hierarchy based on knowledge levels is a necessary component of peer assessment. If scores are public, even less experienced learners can estimate the quality of comments in light of the scores of commenters. (In order to avoid ethical concerns, all public information about learners, including their scores, must be pseudonymised, as is normally done in MMORPGs.) Scores indicate progress, and hence high scores lend learners greater authority in communal activities; this authority can also be built into the score system by weighting (dis)likes according to the scores of the (dis)likers.
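The weighting idea can be made concrete with a one-function sketch, again with hypothetical names: each (dis)like is multiplied by the voter’s own score, so a like from a high-scoring learner moves the recipient’s standing more than one from a beginner.

```python
# Sketch of score-weighted (dis)likes: a vote counts in proportion to
# the voter's own score (+1 for a like, -1 for a dislike).
def weighted_score(votes):
    """votes: list of (voter_score, direction) pairs,
    where direction is +1 for a like and -1 for a dislike."""
    return sum(voter_score * direction for voter_score, direction in votes)

# A like from an expert (score 90) outweighs dislikes
# from two beginners (score 5 each): 90 - 5 - 5 = 80.
print(weighted_score([(90, +1), (5, -1), (5, -1)]))  # → 80
```

This is the same principle that makes the hierarchy self-stabilising: the votes of learners who have demonstrated progress carry more authority, by construction.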

Conclusion

This article argues that adapting motivational tools from MMORPGs for formative, peer-to-peer assessment may offer a solution to the motivation problem from which many cMOOCs suffer. While assessment initially seems to be incompatible with connectivism, the background idea of cMOOCs, I have sought to demonstrate that this incompatibility can be dissolved if (1) formative rather than summative assessment is applied; (2) peer assessment is introduced; and (3) educational purposes of assessment are set in accordance with connectivist principles, aiming to motivate learners rather than trying to measure learning outcome.

After introducing the different forms of MOOCs, and connectivism as the background pedagogy of cMOOCs, the most progressive form of MOOCs, I suggested looking to MMORPGs as a way of gamifying MOOCs. I argued that MMORPGs offer effective formative assessment tools that can increase motivation. Addressing the commonly held concern that assessment is incompatible with connectivism, I showed that the two can be harmonised if assessment is understood in a connectivist spirit. Finally, I addressed the worry that peer assessment can decrease the quality of assessment by proposing a hierarchical distribution of assessment tasks among senior learners.

Introducing MMORPG ideas and methods into cMOOC environments is certainly not the only possible way of increasing the efficiency of MOOCs. My intention in the above argumentation has simply been to show that this approach is at least one addition worth making in exploring the untapped possibilities of MOOCs based on connectivist principles.