Introduction

In 2017, the Israeli Ministry for Social Equality issued a call for proposals for the development of science and engineering academic massive open online courses (MOOCs). This was part of a national digital program designed to initiate and promote digital innovations in higher education and to adapt the education system to the digital age. Specifically, the call was to develop a MOOC on edX, an open learning digital platform.

Driven by this challenging call, the research group responded by offering to convert the undergraduate course Model-Based Systems Engineering with Object-Process Methodology (MBSE with OPM, in short) to a MOOC format. This 3.5 credit point course was designed for undergraduate B.Sc. Information Systems Engineering and Industrial Engineering and Management students at a research university. Throughout the design process, we envisioned students taking the course via the cloud-based learning environment and engaging in meaningful learning. To this end, we aimed to increase learner engagement, use learning by doing assignments, and offer learning materials in a variety of modalities to suit the different learning styles that students exhibit. In addition to investigating these pedagogical aspects, we studied the technological environment of MOOCs and the pedagogical and technological tools they offer for increasing learner engagement and facilitating learning by doing.

The traditional course format included in-person lectures and recitations. Back in 2012, we had already started incorporating into the course a project-based learning (PBL) component, described in previous papers (Wengrowicz et al., 2018), in order to practice conceptual modeling of systems, which plays a central role in systems engineering (Lavi et al., 2020; Mordecai et al., 2022). One of the two conceptual modeling methodologies and languages students used in this course was Object-Process Methodology—OPM (Dori, 2016), standardized as ISO 19450 (https://www.iso.org/standard/62274.html), which is the focus of the MOOC described herein and is considered the most frequently used MBSE methodology (Dong et al., 2022).

A key challenge in designing a course for teaching MBSE in a cloud-based environment is that such courses typically include listening, reading, watching, and answering multiple-choice questions, but students do not get to practice building conceptual models of systems, let alone receive meaningful feedback on their performance. Our study was intended to fill this gap by developing and testing a technological and pedagogical capability that enables practicing the construction of conceptual models.

With this in mind, we identified the critical missing pedagogical and technological component for the emerging course: actual hands-on modeling accompanied by feedback. This led to a key decision to develop a new type of component to be embedded in the MOOC environment. We call this new component MORTIF—Modeling with Real-Time Informative Feedback. MORTIF allows students to visually construct conceptual models of systems and receive meaningful automated feedback and grading. In this paper, we examine the pedagogical usability of MORTIF and its contribution to students taking the MBSE with OPM MOOC. The innovation and main contribution of this research lie in the design of MORTIF as a new cloud-based active learning component for modeling systems and in understanding its pedagogical contribution for different types of learners.

Literature Review

We begin the literature review with a brief introduction to MOOCs. Next, we provide an overview of key pedagogical concepts within the context of MOOCs: learning styles, learner engagement, learning by doing, and formative feedback. We then discuss pedagogical usability aspects and close with an overview of systems engineering and conceptual modeling with OPM.

MOOCs: Massive Open Online Courses

Enrollment in MOOCs increased dramatically in 2020, a year of worldwide campus shutdowns and mass social distancing due to the COVID-19 pandemic: 180 million learners enrolled worldwide, a whopping 50% increase compared with the 120 million enrolled in 2019 (https://www.classcentral.com/report/mooc-stats-2020/). In 2020, the two leading platforms in terms of the number of MOOCs on offer and the number of learners were Coursera, with 4600 courses and 76 million learners, and edX, with 3100 courses and 35 million learners. The most well-known type of MOOC, termed xMOOC, is linear, content-based, and centralized in one institution and often one instructor. xMOOCs typically focus on a set of short video mini-lectures, followed by automated multiple-choice questions that test learners' recall and understanding of the content (Margaryan et al., 2015). The number of microcredentials, that is, two to four xMOOCs bundled around one skill and offered online, also increased substantially, from 820 in 2019 to 1178 in 2020 (Dhawal, 2021). Formally, our MOOC is organized as two 5-week mini-courses, two xMOOCs, which make up a professional certificate program. This is a type of microcredential, one of 385 offered on the edX platform in 2020. Serendipitously, we launched the MOOC on the edX platform in the spring 2019 semester, so our entrance into the COVID-19 era in the winter 2020 semester could not have been smoother or timelier. In what follows, we survey xMOOC characteristics.

Key Pedagogical Concepts

Learning Styles

As the first word in MOOC, massive, indicates, this kind of course is aimed at teaching a very large number of students. Therefore, it should be designed with a wide diversity of learners in mind, including the myriad ways in which learners approach the learning process. Considering this, we surveyed the literature on learning styles before designing our MOOCs. The term learning styles refers to the ways of study or instruction that are most effective for specific individual learners (Pashler et al., 2008). It implicitly assumes that no single style is objectively preferable to another. Rather, a match between a learning style or styles and the individual learner is what facilitates the achievement of optimal learning outcomes (Riener & Willingham, 2010). A learning style does not have to be fixed for an individual learner and may depend on the learner's level of experience or expertise (Yuan et al., 2018).

Rather than rating learners across the same dimensions, proponents of theories on learning styles classify learners by distinct types (Pashler et al., 2008). The most common classification concerns body senses, such as visual, auditory, and kinesthetic (Riener & Willingham, 2010). Individual learning styles can differ between in-person and online learning environments (El-Bishouty et al., 2019; Garland & Martin, 2005).

Coffield et al. (2004), who carried out an extensive literature review concerning learning styles, classified models of learning styles into five classes, or families: (a) constitutionally based (fixed) learning styles and preferences, (b) cognitive structure, (c) stable personality type, (d) "flexibly stable" learning preferences, and (e) learning approaches and strategies. As this classification shows, some theorists view learning styles as innate and/or fixed, while others view them as malleable to some degree.

The concept of learning styles does have its detractors. Both the lack of studies which test the interaction effect of instructional method and learning style on learning outcomes (Pashler et al., 2008) and the lack of valid and reliable measures of learning styles (Coffield et al., 2004) have been pointed out by scholars as major drawbacks of current learning style research. However, recent studies in machine learning, which analyze enormous amounts of learner-generated data from MOOCs, are adding validity to the notion of learning styles (e.g., Mishra et al., 2021). Regardless of where one stands in the learning styles debate, providing personalized instruction that is suitable for each individual learner, independently of learning style, remains of paramount importance. This tenet has guided our MOOC design, and as we show in the sequel, was indeed a major factor that contributed to the course success.

The Felder and Silverman Learning Style Model (FSLSM), proposed by Felder and Silverman (1988) specifically for engineering education, characterizes learners by dimensions rather than fixed groups. It therefore belongs to the flexibly stable learning preferences family of theories. The model includes five dimensions of learning preference and five corresponding dimensions of instructional style, where each dimension contains two opposite values. Table 1 summarizes the tenets of the FSLSM.

El-Bishouty et al. (2019) applied FSLSM to the design and improvement of undergraduate computer science MOOCs. They developed a tool for making recommendations to instructors regarding which instructional strategies or techniques match with which learning style or styles. As part of their effort, they matched learning styles to learning objects in MOOCs. As Table 2 shows, they did not include the “organization” dimension in their analysis, so we matched this dimension’s styles to learning object types ourselves.

FSLSM has also been applied to MOOC learners, including the prediction of learning styles according to data generated by learners as they engage with and contribute data to the course platform (Hmedna et al., 2020; Mishra et al., 2021) and using a questionnaire to ascertain individual learning styles (Mishra et al., 2021).

Learner Engagement

Newmann defined student (learner) engagement as “the student’s psychological investment in and effort directed toward learning, understanding, or mastering the knowledge, skills, or crafts that academic work is intended to promote” (1992, p. 12). Behavioral, cognitive, and affective engagement of students in their learning activities largely determines learning outcomes, student–teacher interactions, class atmosphere, and satisfaction (Dori et al., 2020; Halverson & Graham, 2019; Lei et al., 2018).

Learning in a MOOC environment lacks some of the social elements of in-person learning, which may lead to low learning engagement (Shernoff et al., 2014; Waugh & Su-Searle, 2014) and may, in turn, lead learners to drop out of the course. Perhaps surprisingly, studies have shown that for many students, course completion is not the goal of enrolling in MOOCs. Other reasons for joining a MOOC include (a) wanting to learn a new topic, (b) extending current knowledge, (c) satisfying curiosity, (d) facing a personal challenge, and (e) collecting a professional or academic certificate (Barak et al., 2016; Breslow et al., 2013; Hew et al., 2018; Wang & Baker, 2018; Watted & Barak, 2018).

Researchers (Carroll et al., 2021; Northey et al., 2015; Ornelles et al., 2019; Roll et al., 2021) have recommended several factors for increasing learner engagement, including personalizing the learning materials, using a social networking site, making learning materials accessible via mobile devices, and creating problem-based contexts.

Learning by Doing

Given the principles of learning engagement outlined above, experiential learning (Kolb, 1984, 2014) seems to be a suitable approach. Experiential learning is based on the tenet that the root of learning lies in experiencing—an interaction between people and their environments. According to the experiential learning approach, abstract thinking results from concrete experience. An instructional approach similar in meaning and purpose to experiential learning is learning by doing. This approach involves learning by application and by trial and error, both directed by the instructor (Anido et al., 2001; Anzai & Simon, 1979). Learning by doing has long been touted as the appropriate pedagogical approach for teaching engineering (Carlson & Sullivan, 1999) and has more recently been implemented in MOOC design (Alario-Hoyos et al., 2018). Still, implementing this approach in MOOCs presents technological as well as design challenges, especially given the large number of simultaneous learners in some courses (Alario-Hoyos et al., 2018). Developing a new cloud-based learning by doing MOOC component requires considering its technological usability and, equally important, its pedagogical usability—the aspect on which this study focuses.

Formative Feedback

Feedback is information given to students about their performance or understanding, and it is considered one of the most powerful factors influencing learning in various instructional contexts (Barana et al., 2021; Hattie & Timperley, 2007; Narciss, 2013). Computer-based applications and digital learning materials should provide students with encouraging and immediate feedback that helps them understand the problematic parts of their learning (Nokelainen, 2006). Hattie and Timperley (2007) argue that effective and affective feedback reduces the gap between current and desired learning performance and should help students decide what activities are needed to improve their learning outcomes. Barana et al. (2021), who emphasized the importance of formative feedback, claimed that formative feedback on each step of the solution is perceived as more useful than summative feedback on the final solution only. Following this recommendation, MORTIF provides feedback on all solution steps at once, which suits the characteristics and complexity of the modeling problems.

Pedagogical Usability

Pedagogical usability indicates whether the use of a digital learning component supports learners in achieving the learning objectives. The term is sensitive to students' pedagogical needs and helps to address pedagogical issues while designing or evaluating the use of a digital learning component (Moore et al., 2014; Nokelainen, 2006; Zurita et al., 2019).

Zurita et al. (2019) point out that pedagogical usability is an important characteristic of applications that support learning, as it relates to the added value students perceive while using them. Nokelainen (2006) explains that pedagogical usability is a sub-concept of utility, which depends on the goals set for a learning situation by both the student and the teacher. Moore et al. (2014) explain that pedagogical usability can help to outline an in-depth evaluation of a learning component and its learning outcomes.

Nokelainen (2006) suggested ten criteria, listed and explained in Table 3, including learner control, learner activity, added value, and flexibility, which can help evaluate the pedagogical usability of digital learning materials. Based on these criteria, Nokelainen (2006) developed the Pedagogical Meaningful Learning Questionnaire (PMLQ), which measures elementary school students’ subjective perceptions. Moore et al. (2014) argued it is impossible to use the same pedagogical usability evaluation for all the courses and their learning materials due to the inherent differences between them.

Systems Engineering and Conceptual Modeling with OPM

The International Council on Systems Engineering (INCOSE) defines Systems Engineering as “a transdisciplinary and integrative approach to enable the successful realization, use, and retirement of engineered systems, using systems principles and concepts, and scientific, technological, and management methods” (INCOSE, 2021a). In its SE Vision 2035 document, INCOSE highlighted Model-Based Systems Engineering (MBSE) as the methodology of choice for systems engineering (INCOSE, 2021b). In MBSE, modeling principles, methods, languages, and tools are formally applied to the lifecycle of complex systems (Ramos et al., 2012). Conceptual modeling of systems is a core activity of MBSE, and it is also the focus of the xMOOC described in this paper. Since conceptual modeling is a non-tangible activity, teaching it via learning by doing presents a particular challenge.

In systems engineering, a concept is an abstract representation which maps system function to system structure and behavior (Cameron et al., 2016). The process of system representation in MBSE produces conceptual models, enabling explicit, shared representation of system architecture (Ramos et al., 2012). Conceptual models are constructed using a formal graphical language, and distinguish between different concepts and interrelationships (Dori, 2016).

Object Process Methodology (OPM), ISO 19450 (Dori, 2016), is a language and methodology for conceptual modeling of systems (Dori et al., 2019). It can support an end-to-end system lifecycle, from concept through detailed design and operation to retirement. OPM uses a minimal universal ontology that includes two kinds of things (conceptual building blocks): objects and processes. An object is a thing that exists or might exist physically or informatically. A process transforms one or more objects by creating or consuming them, or by changing their state. Relations among things can be structural (between objects or between processes) or procedural (between an object and a process). As an OPM model is built, its graphical representation in Object-Process Diagrams (OPDs) is coupled with an equivalent natural language description in Object-Process Language (OPL), a subset of English (or any other natural language), which helps the modeler validate the model and the model reader understand it. OPDs and their associated OPL paragraphs are combined in a hierarchy, in which lower-level OPDs refine higher-level ones. OPM modeling can be done using OPCloud, a collaborative web-based environment (Kohen & Dori, 2021). OPCloud was designed to support the creation and editing of OPM models with correct-by-construction architecture and implementation. It provides for collaboratively creating and managing models and supports automatic generation of OPL. Modelers create and manage their models through a web browser.
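To make the ontology concrete, the following Python sketch shows one possible toy representation of an OPM model fragment and how OPL-like sentences could be generated from its links. The class names, relation labels, and example things are ours for illustration only; this is not the OPCloud data model or API.

```python
from dataclasses import dataclass, field

# Toy OPM-like model: things (objects and processes) connected by labeled links,
# with a simple generator of OPL-like English sentences.

@dataclass(frozen=True)
class Thing:
    name: str
    kind: str  # "object" or "process"

@dataclass(frozen=True)
class Link:
    source: Thing
    target: Thing
    relation: str  # e.g., "consumes", "yields", "exhibits"

@dataclass
class OPMModel:
    things: set = field(default_factory=set)
    links: list = field(default_factory=list)

    def add_link(self, source: Thing, target: Thing, relation: str) -> None:
        self.things.update({source, target})
        self.links.append(Link(source, target, relation))

    def to_opl(self) -> list:
        """Generate simple OPL-like sentences, one per link."""
        return [f"{l.source.name} {l.relation} {l.target.name}." for l in self.links]

# Example model fragment: a process that consumes one object and yields another.
order = Thing("Order", "object")
receipt = Thing("Receipt", "object")
handling = Thing("Payment Handling", "process")

model = OPMModel()
model.add_link(handling, order, "consumes")
model.add_link(handling, receipt, "yields")
print("\n".join(model.to_opl()))
```

Running the sketch prints two OPL-like sentences, such as "Payment Handling consumes Order.", illustrating how a graphical model and its textual counterpart can stay in lockstep.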

Research Purpose and Questions

The purpose of this study was to examine the pedagogical usability of the MORTIF component. To achieve this purpose, we investigated the following research questions:

  • (RQ1) Does MORTIF support the pedagogical usability aspects of learner control, reliance on previous knowledge, and effective formative feedback?

  • (RQ2) What is the added value of MORTIF compared with the other types of assignments in the course?

  • (RQ3) What learning style characteristics are reflected in participants’ reference to MORTIF components?

Method

Research Participants

The research included 295 participants, 61% men and 39% women, who signed up for our two short xMOOC courses as verified users and could therefore access the graded tasks. Their ages ranged from 20 to 49 years, with an average of 27.85 (STD = 6.76) years.

Setting

The MOOC platform enables bundling several video clips and practice tasks of various kinds within a unit, and several units within a section. The two 5-week courses included 10 sections, 75 units, 70 video clips, and 196 practice assignments. The assignments students had to perform were at three levels: unit summary tasks, section summary tasks, and final tasks. These assignments included six question types: checkbox, multiple-choice, dropdown, drag and drop, image map, and MORTIF. Table 4 presents the distribution of assignments by task type (unit, section, final) and question type, ordered from simple and easy to complex and difficult.

The first five question types utilized components that were already built into the MOOC platform. MORTIF uses a dedicated server to enable real-time model checking, grading, and meaningful textual feedback, including detailed information on missing and superfluous model facts (see Fig. 1). The assessment module evaluates the participant's submitted model against a "textbook solution." The student is presented with immediate detailed feedback and can then correct and resubmit the model. The grade for each submission is saved by the edX platform. As Table 4 shows, the final tasks comprised only MORTIF-type questions, section tasks included 44% such questions, and the simpler unit tasks included only 2% MORTIF-type questions. During the course, we collected data on participants' problem-solving performance, and at the end of the course, they were asked to provide feedback through an anonymous online questionnaire.
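The following Python sketch illustrates the kind of comparison such an assessment module could perform: the facts of the student's model are compared with those of the textbook solution to produce a grade along with lists of missing and superfluous facts. The function, scoring rule, and example facts are hypothetical and do not reproduce MORTIF's actual grading scheme.

```python
# Illustrative grading sketch: model "facts" (here, OPL-like sentences) from the
# student's submission are compared with the facts of a textbook solution.

def grade_submission(student_facts, solution_facts):
    student, solution = set(student_facts), set(solution_facts)
    missing = sorted(solution - student)      # facts the student did not model
    superfluous = sorted(student - solution)  # facts absent from the textbook solution
    correct = len(student & solution)
    grade = round(100 * correct / len(solution)) if solution else 0
    return grade, missing, superfluous

solution = {
    "Payment Handling consumes Order.",
    "Payment Handling yields Receipt.",
}
submission = {
    "Payment Handling consumes Order.",
    "Payment Handling yields Invoice.",
}

grade, missing, superfluous = grade_submission(submission, solution)
print(f"Grade: {grade}")
print("Missing facts:", missing)
print("Superfluous facts:", superfluous)
```

In this toy run the student would receive a grade of 50, one missing fact, and one superfluous fact, mirroring the two-part feedback structure described above.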

Fig. 1 A screenshot of MORTIF, showing how the submitting student receives a grade with detailed information on missing and superfluous model facts

OPCloud and the edX Learning Tool Interoperability Protocol

Our MBSE xMOOC is an edX Professional Certificate Program that comprises two short courses. Providing learners with learning by doing opportunities is a distinct challenge for any xMOOC that does not afford interaction with instructors, let alone a course on conceptual modeling, which is a hands-on activity. The edX platform offers course developers an online design studio, in which the course is authored and structured into a series of lessons organized in sections and units (Gilbert, 2015). This platform enables bundling video clips and assignments of various kinds, such as multiple-choice questions, but it does not support conceptual modeling tasks. To enhance the edX capabilities, we developed MORTIF as a new type of learning component which uses the edX LTI (Learning Tool Interoperability) (Aleven et al., 2017; IMS, 2022; Massa, 2014) feature via a Web Service module of OPCloud. LTI enables integrating third-party tools—in our case OPCloud—into the edX course learning flow. Assuming a web server represents the integrated tool, LTI defines a three-way protocol: the student’s Web session, the edX server, and the integrated tool server (OPCloud LTI service). MORTIF enables the learner to perform modeling operations in OPCloud directly from the edX environment, as shown in Fig. 1. It enables modeling a system, submitting the model, and receiving detailed real-time formative, actionable feedback. For the formative feedback, the student’s submitted visual model generates text in OPL which is compared to the predefined solution model’s OPL. Two feedback parts are then created: a list of missing model facts and a list of redundant facts in the student’s model.
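As a rough illustration of the tool-server side of such a three-way protocol, the sketch below shows a minimal LTI 1.1-style launch endpoint, assuming a Flask web service. The route name, the rendered response, and the omitted OAuth signature check are simplifications for illustration only; this is not the actual OPCloud LTI Web Service module.

```python
# Minimal sketch of an LTI 1.1-style launch endpoint on the integrated tool's server.
from flask import Flask, request

app = Flask(__name__)

@app.route("/lti/launch", methods=["POST"])
def lti_launch():
    params = request.form
    # In a real deployment, the OAuth 1.0 signature of the launch request must be
    # verified against the shared consumer secret before anything else happens.
    user_id = params.get("user_id")                       # who is launching
    assignment_id = params.get("resource_link_id")        # which MORTIF task
    outcome_url = params.get("lis_outcome_service_url")   # where grades are posted back
    result_id = params.get("lis_result_sourcedid")        # identifies this learner's gradebook cell
    # Here the tool would render the modeling widget for this assignment; after the
    # student submits a model, the computed grade would be posted back to outcome_url
    # for result_id using the LTI outcomes service.
    return f"Modeling task {assignment_id} launched for user {user_id}"

if __name__ == "__main__":
    app.run(port=5000)
```

The three parties in the protocol are visible in this flow: the student's browser posts the launch, the edX server signs it and supplies the outcome parameters, and the tool server renders the modeling task and later reports the grade back.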

Analysis Methodology and Tools

We employed a mixed-methods research methodology (Creswell & Creswell, 2018): Quantitative data included responses to an online questionnaire with close-ended questions, and MORTIF data was collected by the OPCloud-edX server. Qualitative data emanated from the questionnaire’s open-ended questions. We analyzed the quantitative data using descriptive and inferential statistical procedures, while for the content analysis, we identified learning style categories, classified them into themes, and calculated the categories’ distribution (Boréus & Bergström, 2017).

MORTIF Task Level Measuring

We divided the MORTIF assignments into three groups based on their task type (unit, section, or final), which indicates the level of assignment difficulty, and collected objective information from the server usability logs on the number of submissions (1, 2, or 3) for each assignment, assuming that the more complex the task, the more submissions it requires. This measurement illuminates three aspects of pedagogical usability: learner control, regarding the breakdown of the MORTIF assignments into structured, meaningful units; previous knowledge, expressed by the increasing difficulty level; and feedback, which is needed more as the problem becomes more complex. Cronbach's alpha for internal consistency between the numbers of submissions within each task type (unit, section, or final) was 0.87, 0.93, and 0.89, respectively.
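For reference, Cronbach's alpha over such submission-count data can be computed with the standard formula over a participants-by-items matrix, as in the short sketch below. The data shown are invented toy values; this is not the authors' analysis script.

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
# Rows are participants, columns are assignments (items); cells hold the number of
# submissions (1, 2, or 3) for that assignment.

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                            # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of participants' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 participants x 4 unit-level MORTIF assignments.
submissions = [
    [1, 1, 2, 1],
    [2, 2, 2, 1],
    [1, 1, 1, 1],
    [3, 2, 3, 2],
    [2, 2, 2, 2],
]
print(round(cronbach_alpha(submissions), 2))
```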

Students’ Perceived Contribution Measuring

To obtain subjective data regarding the same three pedagogical usability aspects, we asked the participants in the end-of-course online questionnaire to grade on a 1–10 scale the extent to which each course element contributed to their proficiency with the course content. They were also asked to justify their grading for each element. Exploratory factor analysis revealed two factors—groups of course elements based on their contribution level: learning by doing active elements, such as MORTIF tasks, and passive elements, such as the glossary. These two factors explained 62% of the variance in contribution level. Internal consistency between all the learning by doing active element items was high (Cronbach's alpha = 0.83).

Student Question Type Preference Measuring

The added value aspect of MORTIF was also extracted from the end-of-course online questionnaire. We asked the participants to grade on a 0–5 scale (0, not at all, to 5, much more) the extent to which we should continue to incorporate each question type, and to justify their grading. Exploratory factor analysis revealed two factors of question-type preference: visual-based questions, such as MORTIF assignments, and textual-based questions, such as multiple-choice ones. These two factors explained 59% of the variance in question type preference level. Cronbach's alpha internal consistency between all the visual-based items and between all the textual-based items was 0.75 and 0.70, respectively.

Student Learning Style Preference

Participants were asked to explain their grading for each close-ended question in the first and second online questionnaire sections. In addition, in the last section of the end-of-course questionnaire, participants were asked whether they would recommend this course to a friend and to explain why. Their textual explanations served as subjective raw data for the qualitative analysis. We identified and classified learning style preference categories related to MORTIF whenever they emerged from participants' answers to any of the open questions. Table 5 presents the learning style themes, categories, and examples of students' explanations related to MORTIF for each category.

Interrater reliability analysis using Cohen's kappa statistic to determine consistency between the two experts who classified the participants' answers into the emergent categories yielded high judgment agreement: κ = .876 (95% CI, .826 to .916), p < .001.
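Cohen's kappa for two raters can be computed, for example, with scikit-learn, as in the brief sketch below. The category labels and codings are invented for illustration and do not reproduce the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Two coders assign each student explanation to a learning style category;
# kappa measures their agreement beyond chance.
rater_1 = ["active", "feedback", "visual", "active", "meaningful", "feedback"]
rater_2 = ["active", "feedback", "visual", "feedback", "meaningful", "feedback"]

print(round(cohen_kappa_score(rater_1, rater_2), 3))
```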

Research Ethics

The study was approved by the Institutional Behavioral Sciences Research Ethics Committee on April 10, 2014.

Results

RQ1 aimed to examine the contribution of MORTIF to breaking down the learning practice into meaningful units (learner control) and to improving the graduality of the assignments' difficulty level (i.e., basing them on previous knowledge), as well as to understand the usefulness of the feedback at each difficulty level. To examine the pedagogical usability of MORTIF based on objective data, we analyzed the participants' performance while working on MORTIF using the OPCloud-edX server usability logs. Considering the number of participants (N = 295) and the 41 MORTIF-type assignments embedded in the course, we analyzed 12,095 submissions. Of these, only 2% were correct solutions at the first submission, 49% were resubmitted once after receiving formative feedback, 34% twice, and 16% three times. Analyzing the number of MORTIF submissions by task type (unit, section, final) using repeated measures ANOVA revealed a significant difference between the three (F(2, 588) = 632.37, p < .001, η2 = .68), indicating that participants used more submissions for MORTIF final tasks (M = 2.20, STD = .44) than for the section tasks (M = 1.88, STD = .29) or unit tasks (M = 1.20, STD = .34). Likewise, the number of submissions for section tasks was significantly higher than for the unit tasks. These findings indicate a clear distinction between MORTIF task types, with increasingly frequent use of the formative feedback and resubmission option as the task difficulty level increases from unit to section to final tasks.
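A one-way repeated measures ANOVA of this kind can be run, for instance, with statsmodels, as sketched below on invented long-format data (one row per participant and task type); the numbers are illustrative only and are not the study's data or analysis script.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: each participant contributes one mean submission count per
# task type (the within-subject factor).
data = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "task_type":   ["unit", "section", "final"] * 3,
    "submissions": [1.1, 1.9, 2.2, 1.3, 1.8, 2.1, 1.2, 2.0, 2.4],
})

result = AnovaRM(data, depvar="submissions", subject="participant",
                 within=["task_type"]).fit()
print(result.anova_table)  # F statistic and p-value for the task_type effect
```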

Subjective data analysis yielded similar results. We analyzed students' perceived contribution regarding the three MORTIF task types—unit, section, and final. The total average perceived contribution was 8.38 of 10 (STD = 1.54). Analyzing the differences between the perceived contributions of the three task types using repeated measures ANOVA revealed a significant difference (F(2,588) = 121.13, p < .001, η2 = .29). Post hoc Bonferroni tests indicated that the perceived contribution of the MORTIF unit tasks—the most basic ones (M = 7.67, STD = 2.14)—was significantly lower than that of the section tasks—the intermediate ones (M = 8.85, STD = 1.26)—and the final tasks—the most complex ones (M = 8.62, STD = 1.67). A significant difference was likewise found between the section and final tasks. These findings indicate a very high perceived contribution for all MORTIF task types, as well as a clear distinction between the three types, with the perceived contribution increasing together with the task difficulty level and the frequency of feedback and resubmission.

Students’ explanations for the perceived MORTIF contribution gradings reinforce these findings, as exemplified below:

  • It is excellent. Both giving experience and the missing sentences direct you to what is missing in your solution and what you did wrong.

  • The feedback was excellent, it really helped to know what the difference is between my answer and the correct answer.

  • The feedback at the end of each submission helped to sharpen the points we did not touch on in the diagram and improve the understanding of their importance.

  • The feedback was without a doubt the most effective method for me to study the system.

To answer RQ2 regarding the added value of MORTIF compared with other MOOC assignment types, we analyzed the subjective data on students' preferences. Figure 2 shows, by question type, the distribution of participants who recommended "5, much more" for continuing and increasing the incorporation of that question type.

Fig. 2 Participants' preferred problem type distribution

Comparing the preference level of MORTIF with the other question types using repeated measures ANOVA revealed a significant difference (F(5,1440) = 64.81, p < .001, η2 = .18). Based on post hoc Bonferroni tests and as Table 6 and Fig. 2 show, MORTIF has a significantly higher preference level than all the other question types. Moreover, all three visual problem types—drag and drop, image map, and MORTIF—were preferred over the other three problem types—dropdown, checkbox, and multiple choice, which are textual in nature.

Students' explanations of their preference for MORTIF reinforce these findings. They explained, for example:

  • Such questions are undoubtedly necessary in the course.

  • These questions helped understand how to use the system and how to model what we wanted.

  • These questions are the most important ones because in the end what we do in the course is exactly that.

  • Of all the questions, these questions contributed very practically, and the feedback was helpful.

  • The MORTIF questions were excellent! Really hands-on practicing the course content in a good and correct way.

  • Dealing with MORTIF greatly helps in understanding how things actually work. The feedback helped a lot in understanding the mistakes and spared frustration when the answers were inaccurate.

  • Constructing the model really made the learning deeper than all the other problem types.

RQ3 called for identifying flexibility aspects of pedagogical usability that can be attributed to MORTIF. Specifically, we looked for learning style characteristics reflected in participants' references to MORTIF components.

To answer this question, we analyzed the open-ended responses and identified nine categories of learning styles that emerged from the participants' explanations, classifying them into the four FSLSM themes. Table 7 presents the learning style categories, their themes, and the frequency of each, as derived from participants' open-question explanations regarding MORTIF.

Table 7 Learning style themes, categories, and their frequencies, as reflected in participants’ reference to the MORTIF-type problems

The active-reflective learning theme was by far the most frequent one. Within this theme, active learning, meaningful learning, and feedback are the three most frequent categories of all nine, and they are reflected in 69% of the participants' references to MORTIF. Moreover, students indicated MORTIF as significant in all four learning style themes.

Discussion

The goal of this study was to examine the pedagogical usability of MORTIF—Modeling with Real-Time Informative Feedback—a new component developed and implemented within the MBSE with OPM xMOOC in the edX environment. Findings indicated a clear distinction between MORTIF-type tasks, suggesting that MORTIF assignments cater to the learner control aspect. This aspect was realized by segmenting the xMOOC learning materials into small units, which significantly reduced learner overload. While it is difficult to define the optimal learner load, an accepted convention refers to presenting 7 ± 2 items or concepts (Jahnke et al., 2020; Nokelainen, 2006; Zurita et al., 2019). Cognitive Load Theory (Sweller, 2011; Sweller et al., 1998; Wirzberger et al., 2020) likewise refers to a limited working memory capacity of around four stored information items. Within the unit-level MORTIF assignments, learners could practice a narrow, focused aspect of the taught content. More challenging section-level MORTIF assignments practiced content taught in three to five units. The most challenging final MORTIF assignments practiced the whole course content. At all levels, avoiding overloading of short-term memory was a leading guideline. Based on both quantitative and qualitative findings, MORTIF was instrumental in helping learners conceptualize the learning materials and retain them in long-term memory.

The findings confirming the increasing task difficulty level from unit to section to final assignments indicate that MORTIF assignments leverage the previous knowledge aspect of pedagogical usability. This aspect emphasizes the importance of both encouraging learners to use previous knowledge to accomplish learning tasks and the cumulative nature of learning (Nokelainen, 2006; Zurita et al., 2019). Cumulative learning relates to cognitive processes of continuous information acquisition and old-new integration (Thórisson et al., 2019). Unification, as coined by Vygotsky and Cole (Vygotsky & Cole, 1978), is the main process of cumulative learning, in which new data integrates with already-acquired knowledge and is incrementally compressed and generalized. Within the unification thinking process, incorrect knowledge is replaced by current, correct knowledge, making it efficiently applicable to learning (Thórisson et al., 2019). A correct answer is sometimes mistakenly considered a significant learning achievement, but to be considered complete, learning should be cumulative and include repeated practice (Karpicke & Roediger, 2008). The cumulative and repeated practice principle is manifested in MORTIF assignments through increasing difficulty, with each level including and building on the previous one: a unit assignment covers one unit's material, a section assignment covers all its units' materials, and final assignments cover materials from all sections.

We found that the frequency of using MORTIF's formative feedback and resubmission option increased with the task difficulty level, illuminating the feedback aspect of pedagogical usability. This is in line with the guideline of providing immediate feedback to help students correct mistakes and misconceptions (Nokelainen, 2006; Zurita et al., 2019). The textual feedback that MORTIF provides on graphical conceptual modeling tasks reinforces the integration between the learner's visual and verbal channels (Mayer, 2017). The findings indicating a relationship between feedback use and the difficulty level of the MORTIF assignments validate the effectiveness of MORTIF's formative feedback. Further evidence of the feedback effectiveness can be found in the students' verbal explanations, e.g., "The feedback was without a doubt the most effective method for me to study the system." These findings are consistent with other studies of formative feedback effectiveness (Barana et al., 2021; Hattie & Timperley, 2007; Narciss, 2013).

The preference level of MORTIF assignments was the highest of all question types. Students requested that we incorporate more MORTIF-type assignments in the course, explaining, for example, that "Of all the questions, these questions contributed practically the most, and the feedback was helpful." These findings indicate MORTIF's added value aspect of pedagogical usability. New digital learning materials are expected to introduce added value on top of traditional ones (Nokelainen, 2006; Zurita et al., 2019). MORTIF's added value lies in students' hands-on practice and the immediate formative feedback they receive. Hands-on experience pertains also to the applicability aspect of pedagogical usability, since conceptual modeling is one of the important skills for scientists and engineers in general (National Research Council, 2013) and for systems engineers in particular (Crawley et al., 2011; Dori, 2016). MORTIF enables not only learning MBSE theory but also practicing conceptual modeling and receiving immediate feedback.

Visual and textual information types trigger different levels of recall. Visual data is known to be easier to recall from both short- and long-term memory (Giles et al., 1982). Indeed, all three xMOOC visual problem types—drag and drop, image map, and MORTIF—were preferred over the three textual problem types—dropdown, checkbox, and multiple choice. Given that engineering students are known for their visual learning style orientation (Tulsi et al., 2016), this finding is not surprising. Yet, it highlights MORTIF's added value as a visual learning scaffold with unique properties, such as active learning and feedback-based resubmission options, that other visual learning aids lack. MORTIF is related to each of the four FSLSM learning style aspects: its consideration of learners' individual differences is indicative of its flexibility. Active learning, meaningful learning, and feedback were repeated in students' explanations, emphasizing the nature of the MORTIF component as a promoter of learning by doing and pointing to the learner activity usability aspect.

Conclusion

We investigated the pedagogical usability aspects of our newly developed MORTIF component as reflected in the MBSE with OPM xMOOC. We found evidence for seven pedagogical usability aspects: learner control, learner activity, applicability, added value, previous knowledge, flexibility, and feedback. All these aspects were designed and implemented in MORTIF, and this study has confirmed that they are also reflected in students' use of this component. The combination of learning by doing with real-time informative feedback has thus been shown to promote meaningful learning effectively, as evident from students' feedback. Based on both qualitative and quantitative findings, students with diverse learning styles and paces are strongly attracted to, greatly benefit from, and wholeheartedly embrace MORTIF.

Contribution

The originality of the intervention lies in introducing a new type of interaction: students work with a real modeling environment, OPCloud, embedded in a MOOC platform, into which a real-time feedback mechanism has been incorporated. This new mechanism provides the learners with meaningful verbal comments on the model they have just submitted, allowing them to correct the model and resubmit an improved version as many times as the system allows. The research aimed at examining the pedagogical usability of the newly developed MORTIF assignment type. Analysis of answers to the open-ended questions revealed that MORTIF has been successful in catering to a variety of learning styles.

The development of the embedded MORTIF component and its associated research contributes to both the theoretical and practical bodies of knowledge on active learning in MOOCs and its relation to pedagogical usability, perceived contribution, and suitability to a variety of learning style aspects. At the theoretical level, MORTIF has been shown to greatly enhance the learning environment and advance the learning process. Benefits of MORTIF-type assignments include active learning, provision of meaningful immediate feedback to the learner, the option to use the feedback on the spot and resubmit an improved model, and suitability for a variety of learning styles. Practically, these qualities make MORTIF a learning mode that MOOC developers should seriously consider for inclusion as a central component to improve current and future MOOCs. The study contributes to educators in both science and engineering education who teach conceptual modeling of systems as a MOOC or in class.

Limitations

The number of model submissions was limited, and the data analyzed from the usability logs and the online questionnaires were anonymous, so it was not possible to link these sources, and they had to be analyzed separately.

Recommendations

Educators and students from various domains and at various academic levels are welcome to freely use OPCloud, our cloud-based collaborative modeling environment, in their face-to-face courses and MOOCs. The simplicity, expressiveness, and high accessibility of the conceptual modeling methodology and the MORTIF component we developed enable their introduction into domains of science and engineering education that have not yet begun using conceptual modeling of systems and phenomena. This can potentially enhance students' thinking skills and instructors' ability to assess those skills as their students develop them.