1 Introduction

Whether they are teachers, trainers, or professional experts, educational practitioners face the challenge of designing and planning learning situations suitable for a diverse range of audiences and educational contexts (Ordu, 2021; Yang et al., 2023). These courses are sometimes complex and multimodal and can be co-designed, in part or in whole, with the help of a third party such as another teacher, an instructional designer, or, on large-scale projects, an entire teaching team. Such collaboration requires in-depth exchanges and compromises to determine the structure and planning of the training.

To achieve this, some teams use simple Post-its that they stick on a board to discuss their course creation and visualize its granularity (Burguete & Urrego, 2023). These small pieces of paper can represent a succession of learning units with their content, or any other information related to the training. We questioned this phenomenon, especially since there are now a variety of digital and analogic tools designed to facilitate the design process, which seem far more structured than these simple pieces of paper (Komis et al., 2013). Due to the lack of data on the practice of using Post-its in the literature, we hypothesized at the start of our research that the tangible, inductive, collaborative, and non-digital nature of Post-its was absent or at least insufficiently present in existing solutions to satisfy certain design teams. Therefore, we aimed to create a new tool meeting these criteria in the form of an analogic pedagogical scripting kit. However, the inductive nature of Post-its did not seem sufficiently structured to create effective training based on criteria such as engagement, motivation, retention, or learning performance. Consequently, we sought to incorporate the concept of microlearning, which is recognized in the literature for its effectiveness on these same evaluation criteria (Alias & Razak, 2024; De Gagne et al., 2019; Taylor & Hung, 2022).

Thus, in a Design-Based Research (DBR) approach (Wang & Hannafin, 2005), we developed a new modeling language based on the concept of bricks, itself inspired by granularization and microlearning, and implemented it in several versions of a kit. This visual language, which takes the form of tangible pieces arranged on a board, allows for the observation of the links between, and the distribution of, activities and educational resources in the form of learning units. The aim is to guide and optimize the design and analysis of educational scenarios.

In this article, our primary objective is to empirically demonstrate the utility and usability of this new pedagogical scenario kit through a qualitative study. For this study, we chose MOOCs as test cases: they have the advantage of being easily accessible, but they often suffer from low pedagogical quality (Margaryan et al., 2015), low learner engagement (Bote-Lorenzo & Gómez-Sánchez, 2017; Depover et al., 2017; Tcheng Blairon & Cristol, 2020), and high attrition rates.

To do this, we will first present the other existing analogic tools as well as the concepts of granularization and microlearning that led us to create a new scripting methodology centered on the brick. We will then describe this new methodology and how it is implemented in the kit. Next, we will present the research and development of the kit and then the empirical evaluation of its utility and usability through the modeling of three distinct MOOCs. Finally, we will describe and discuss the foundations of a new Learning Design “theory” based on the concept of bricks and the new research avenues we will pursue to continue studying the tool.

2 Related work

2.1 Two existing analogical kits

Among the pedagogical scripting tools, there is, to our knowledge, no study or literature review that provides an exhaustive reference or broadly examines their uses. Our research on the existing state of the art led us to discover six digital solutions (LearnSpirit, Learning Designer, OAS, Parcooroo, PedagoMaker, and Modulo) and two analogic kits consisting of printed cards (ABC Learning Design and Learning Battle Cards). Since our goal was to create a new analogic tool using an inductive approach similar to that of Post-its, in this article we focused only on these latter two tools to identify and understand their main specificities.

ABC Learning Design is a free downloadable tool created in 2015 by Nataša Perović and Clive Young of University College London (UCL) (https://abc-ld.org/). This academic product (Perović & Young, 2020) is, according to its authors, widely used in Europe and has been the subject of numerous publications listed on their website. It helps facilitate roughly 90-minute workshops in a dynamic and engaging way, in which teams of teachers collaborate to think together about the design or analysis of sequences of teaching activities. The kit consists of three distinct elements. First, printed cards representing the six learning modalities from Diana Laurillard's "Conversational Framework" theoretical model; on the back, these cards present concrete examples of activities to be organized. In addition, the kit includes a work plan that allows the cards to be arranged according to a defined time structure. Finally, a dedicated sheet provides a clear view of the level of hybridization and the respective importance of the different learning modalities. The tool is cooperative in nature, access is free, and its main advantage is its ease of use by teams during workshops. Because of its 90-minute workshop format, it seems more suitable for a novice audience of students or occasional trainers than for training experts.

The second tool we identified is a commercial product called Learning Battle Cards (https://store.learningbattlecards.com/). It was created in 2011 by the Polish designer Sławomir Łais to, according to his website, "close the gap between instructional design and design thinking". The kit consists of a set of 110 cards, printed on both sides, representing various learning methods and their uses. An additional canvas can be purchased to position the cards and thus facilitate visualizing the training. Like ABC Learning Design, this approach represents a tool-driven design methodology intended to guide the user in creating or re-engineering courses. Its major advantages are its playful nature and the abundant variety of methods and activities it presents, which can be particularly interesting for trainers looking for new ideas. So, just like ABC Learning Design, this kit is based on pre-printed activity cards that guide the user in what we call a deductive approach, as opposed to the much more open and heuristic Post-it practice.

In conclusion, the operating principles of these two existing solutions seemed sufficiently different from our project to allow us to proceed. The idea of offering a pedagogical scripting kit with an inductive model seemed new and relevant to enhance the creativity and expertise of designers. However, a structuring theoretical framework was lacking for the kit, which would enable both the efficient design and analysis of training programs. This is why we decided to integrate the concepts of granularization and microlearning into a new methodology, considering both their advantages and limitations.

2.2 Granularization and microlearning

The benefit of breaking down and distributing learning over time has been known for a long time, notably since Ebbinghaus’ work on memorization (Ebbinghaus, 1885). It is now well established in cognitive psychology that distributed learning is superior to massed learning for memorization because it incorporates rest periods that prevent neuronal “exhaustion” and facilitate consolidation (Lieury, 2015).

However, granularization is not an autonomous concept, although some authors have attempted to define it (Littlejohn & Shum, 2003). There is great variability in the size of the segments, and even in the terminology used, depending on the research discipline. For example, in microlearning, which we discuss in more detail below, Eibl (2007), a researcher in education sciences, defined granularization as breaking down content into small, autonomous units linked by specific learning objectives. In cognitive psychology, this principle is frequently used but appears under different terms, and the size of the object resulting from segmentation varies according to the concept involved. For example, "segmentation" (Clark & Mayer, 2016) divides courses into smaller parts, while "chunking" (Miller, 1956) is directly related to working memory and involves very small objects such as groupings of numbers. Moreover, many current studies based on cognitive load theory also support the idea of granularizing content to facilitate learning (Sweller, 1988). Authors such as Alias and Razak (2024) or Lee (2023) consider cognitive load theory one of the main theoretical underpinnings of the effectiveness of microlearning.

According to Theo Hug, microlearning is a recent concept that appeared in the early 2000s and is still poorly defined, mainly due to a lack of research in education sciences (Hug, 2007, 2022). Its development is linked to the deployment of various information and communication technologies driven by Web 2.0 (Buchem & Hamelmann, 2010). Carla Torgerson offers a definition based on a synthesis of several authors' work: for her, microlearning is an educational experience that is short, focused, and effective (Torgerson, 2021, p. 20). In this sense, she aligns with Eibl (2007), who advocated setting precise educational objectives for training units, notably using Bloom's taxonomy. This strategy had the main advantage of defining the size of learning units and making them autonomous while keeping the possibility of linking them. Similarly, Yahaira Torres Rivera indicates that some authors draw a parallel between Lego bricks and microlearning, which consists "of joining small pieces to form a figure […]. Something similar happens in microlearning, where brief pieces of information are connected to achieve learning about a topic." (Rivera, 2022, p. 30, author's translation).

Thus, the concept of microlearning could define the final level of the granularization process as focused, short, and effective educational experiences. In our case, in practice, a learning unit could be represented by a Post-it and associated with others to form a coherent mono- or multimodal pedagogical script. However, in a broader modeling context, the inherently short nature of microlearning quickly appeared as a limitation for producing or analyzing certain training programs. Therefore, in the next chapter, we propose to adapt the concepts of granularization and microlearning into a new scripting methodology based on the concept of "bricks".

3 Description of the scriptwriting methodology and the main principles of the pedagogical scenario kit

3.1 A brick-based pedagogical design methodology

To produce a methodology that would guide the design of a training a priori (prescriptive scenario) or its analysis a posteriori (descriptive scenario) for a pedagogical diagnosis, we first looked to the concepts of granularization and microlearning. Their common characteristics, such as small size, short duration, and their potential for multimodality (Fidan, 2023; Kohnke et al., 2023), seemed ideal for distributing learning and optimizing training (Celik & Cagiltay, 2023; De Gagne et al., 2019; Kapp & DeFelice, 2019; Leong et al., 2021). However, we quickly realized that these characteristics, long known, particularly in cognitive psychology, to be effective for outcomes such as memorization or attention (Ebbinghaus, 1885; Kelley & Whatson, 2013; McBride & Cutting, 2019; Toppino et al., 1991), were not sufficient to model all existing forms of courses. Indeed, many training courses, such as university lectures, are often characterized by massive content and cannot easily be segmented. We therefore refocused our thinking on adjusting these characteristics to make them compatible with all types of training. With this in mind, we identified three key variables: the number of tracks (from one to many), their sizes (reduced to massive), and their durations (short to long). These variables constitute a more flexible and adaptable attempt at modeling, considering the diversity of training structures and contents. As a result, granularization and microlearning, which we initially considered as means of scripting, have in our methodology been recast as goals to be achieved during design, in order to facilitate the distribution of learning and reduce the duration of activities.

We realized it was very important that each piece's content be able to model not only moments of formal teaching but also informal learning periods. This approach aims to make our model capable of representing the diversity of teaching and learning formats.

In addition, as our modeling approach is based on a temporal sequence, we also wanted to be able to represent breaks and resting moments. These elements, although apparently non-educational, are nevertheless, as we saw earlier, of utmost importance in the process of memorization and assimilation of knowledge.

The need to contextualize a training course in time and space prompted us to give kit users the possibility of defining different modalities for each piece, by specifying whether it is synchronous or asynchronous, face-to-face or remote, and collaborative or autonomous. It should be noted that, for each piece, the duration of the activity can be defined a priori by the teacher or left to the students' discretion, especially in an asynchronous and remote context.

The choice of the term "brick" aims to embody, in a precise and versatile way, the fundamental elements of our model. We first explored various existing expressions widely used in the literature, such as Learning Unit, Learning Object, Learning Nugget, Educational Capsule, Quick Learning, Snackable Learning, Bite-sized Learning, Nano Learning, Tiny Courses, Spaced Learning, and Rapid Learning. Although these terms reflect similar pedagogical approaches, often characterized by brevity, specificity, and ease of assimilation, they did not fully correspond to all the dimensions we wished to encompass. Some were too specific to particular fields, such as the association of Learning Object with computer science, while others lacked the flexibility to represent the diversity of practices. That is when the term "brick" emerged as the most appropriate choice. As Yahaira Torres Rivera (2022) pointed out with the Lego game, the brick metaphor encompasses modularity, flexibility of assembly, and the ability to build something solid from fundamental elements of all sizes. These bricks are universal, as is our pedagogical approach, which aims to be adaptable to a variety of educational contexts, ranging from formal to informal. Moreover, the term transcends the connotations of any particular field, providing a linguistic flexibility that matches the diversity inherent in our approach. Thus, the choice of the term "brick" is not just a semantic decision but a deliberate strategic choice to encapsulate the versatility, solid foundation, and capacity for assembly inherent in each element of our pedagogical approach.

Now that we have detailed the context of the kit's creation and its conceptual bases, we move on to describing how the educational scenario kit works.

3.2 How the educational scenario kit works: the principle

The Eduscript Doctor kit comes in the form of a box containing several pieces to be assembled and arranged on a board. It requires erasable color markers to write information on the different pieces. The content is divided into two distinct parts: the board sections, which are always present, and the main and complementary pieces, which vary in number and location depending on the design (Fig. 1).

Fig. 1 Overview of the Eduscript Doctor pedagogical scenario kit (Prototype 2.4)

Unlike the other kits, it deliberately does not offer any pre-defined activities, so as not to limit designers to specific proposals integrated into the tool, and thus to encourage their creativity.

In accordance with the major guidelines previously mentioned for creating the tool, the user is invited to follow an inductive approach using the Post-it practice as a reference, with pieces to be filled in. However, our intention was to offer more guidance than a blank sheet of paper, with a view to facilitating the creative process. In our approach, the brick, which is our generic Learning Unit, is equivalent to an optimized Post-it. As shown in Fig. 2, each piece carries several essential items of information.

Fig. 2 Information to note on each piece representing an activity and possible links

First, a unique code is used to identify each activity. In addition, it is necessary to specify the activity's duration, type (active or passive), the associated learning objectives, the learning modality (synchronous or asynchronous, face-to-face or remote, individual or collaborative), and the learner's role (actor, spectator, or passive). Possible links to other pieces, identified or not by a code, complete this information, offering the possibility of enriching activities with additional resources, viewing reminders, or accessing other activities. The brick piece of the kit can be found in Appendix 1 – Table 7, ID 1. We observe that defining a precise objective makes the brick autonomous and that, if its duration is also short, this unit corresponds to Carla Torgerson's definition of microlearning as a focused, short, and effective educational experience.
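Purely as an illustration (not part of the kit itself), the information carried by each brick can be sketched as a simple data structure; every field name below is our own hypothetical choice, derived from the list of information to note:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Brick:
    """Hypothetical representation of one brick (Learning Unit) of the kit.

    All field names are illustrative assumptions, not the kit's vocabulary.
    """
    code: str                    # unique identifier of the activity
    duration_min: int            # activity duration in minutes
    active: bool                 # activity type: active (True) or passive (False)
    objectives: List[str]        # associated learning objectives
    synchronous: bool = True     # synchronous vs asynchronous
    face_to_face: bool = True    # face-to-face vs remote
    collaborative: bool = False  # collaborative vs individual
    links: List[str] = field(default_factory=list)  # codes of linked pieces

# A short brick with one precise objective: focused, short, and autonomous,
# matching Torgerson's definition of microlearning cited above.
quiz = Brick(code="Sq1 S1 U B", duration_min=5, active=True,
             objectives=["Recall key terms"], links=["Sq1 S1 U A"])
```

A designer filling in a physical brick records exactly this kind of information with a marker; the structure above only makes the fields explicit.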

In the context of course scripting, multiple bricks and their targeted objectives can complement a set of more general objectives that will be defined at the level of a piece of the kit dedicated to sessions (Appendix 1 – Table 6, ID 2) or more broadly on a large piece representing a sequence (Appendix 1 – Table 6, ID 1).

To further facilitate the design process, we created, in addition to the general-purpose brick mentioned earlier, three other main pieces with the same characteristics but specialized (Appendix 1 – Table 7). The first is dedicated to communicating educational objectives to learners (blue piece, Fig. 3), the second to all types of assessments (green piece, Fig. 3), and the third to learning how to use an artifact (instrumentation process (Rabardel, 1995) (Appendix 1, Table 7, ID 4).

Fig. 3 Principle of multiple linkages between different activities and resources across two sessions

By combining and linking these four main pieces, the objective was to lead the designer to communicate educational objectives to learners, promote the use of assessments (particularly formative ones), visualize the distribution of learning and the type of learner engagement (active or passive), and finally manage and make visible the links between activities with recall or call pieces (Appendix 1 – Table 8, ID 1 and 2).

As far as links are concerned, the principle is based on assigning a code to each Learning Unit or brick, identified in Fig. 3 by letters. The kit's full coding system is more complex, so as to better locate each piece within the sequence and sessions.

We observe that in activity G of session 1, a reminder of activities B and C is carried out using a dedicated piece. At the end of sessions 1 and 2, evaluation activities show up in green and the content to be evaluated is clearly marked out (GH and GOP). It should be noted it is possible to establish links between sessions 1 and 2 as in activity N, thus allowing recalls from one session to another (D in session 1 and KL in session 2 in N). In addition, for the activities dedicated to the definition of the pedagogical objectives represented by the blue pieces, the links offer the possibility to specifically identify the content that will be highlighted, as for activity J. In our case, the result of assessment I will modify session 2 learning objectives, thanks to the link from I to J.
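The recall and call principle just described can be sketched as a simple consistency check over coded units. The session contents and recall links below are a simplified stand-in inspired by Fig. 3, not the actual MOOC data:

```python
# Simplified stand-in data: unit codes per session and the recalls each unit makes.
sessions = {
    1: ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
    2: ["J", "K", "L", "M", "N", "O", "P"],
}
recalls = {
    "G": ["B", "C"],       # within-session recall (as in session 1 above)
    "N": ["D", "K", "L"],  # cross-session recall: D comes from session 1
}

def check_recalls(sessions, recalls):
    """Return, for each recalling unit, the list of recalled codes
    that do not exist anywhere in the sequence (dangling links)."""
    known = {u for units in sessions.values() for u in units}
    return {unit: [r for r in targets if r not in known]
            for unit, targets in recalls.items()}
```

With the data above, `check_recalls` returns an empty list for every unit, which reflects the kit's intent: every recall or call piece should point at a code that actually appears on the board.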

Finally, in Fig. 4, we can see that an activity can be composed of one or more resources or become a resource for another activity using a link. Several complementary and optional components are integrated into the kit to remind the user that they can proactively (before) and retroactively (after) include evaluations, motivational elements, and feedback in an activity (module) (Appendix 1 – Table 8).

Fig. 4 Principle of modelling an activity consisting of resources or one or more other activities

In this example, we can see the activity scheduled after the break (red piece (Appendix 1 – Table 8, ID 8)) includes resources related to motivation, as found at the beginning of the activity (purple piece (Appendix 1 – Table 8, ID 6)), proactive feedback (yellow piece (Appendix 1 – Table 8, ID 5)) and then, at the end of the activity, a retroactive evaluation (green piece (Appendix 1 – Table 8, ID 4)). It continues with another activity that also includes a retroactive assessment.

In conclusion, considering the possibility of creating a new methodological tool that meets this dual requirement of orientation and creativity stimulation during the scriptwriting process, we set ourselves a threefold research objective. First, to design a scripting methodology that favors a heuristic approach based on an adaptation of the concepts of granularization and microlearning. Then, to create an analogical scriptwriting tool that embodies this new methodology through a well-suited visual language, whose main operating principles we have already outlined. Finally, to evaluate the uses of this tool in ordinary contexts so as to assess its strengths and limitations.

To achieve these objectives, we conducted a design process continued in usage (Goigoux, 2017), similar to DBR (Renaud, 2020; Wang & Hannafin, 2005). This approach unfolds in three sequential steps. The first step involves the initial design of the tool and its methodology. The second step is dedicated to their improvement with the participation of practitioners. The third step focuses on the external evaluation of the tool. In this final phase, our article is limited to studying the utility and usability of the kit by modeling a series of three different MOOC-type training cases. Our research question is: "Is the Eduscript Doctor pedagogical scenario kit useful and usable for designing MOOCs?" We then formulate the following main hypotheses to assess utility and usability respectively: first, the scripting kit enables a clear visualization of the structure of a MOOC, thus facilitating the evaluation of its pedagogical design; second, the kit is suited to an audience of instructional designers.

4 Methodology

4.1 Research & development of Eduscript Doctor

A research and development (R&D) process for the tool took place from December 2021 to May 2023, following an agile approach similar to DBR (Wang & Hannafin, 2005). The design began with the creation of an initial series of prototypes in a Fablab, using recycled materials. Once the first manufacturing process had stabilized, 26 copies were produced and tested in 11 different centers in France, including universities, adult education centers, and tennis clubs. Each center signed a Material Transfer Agreement and completed at least 3 h of training on how to use the kit, either online or one-on-one.

The informal feedback obtained during training sessions, as well as interviews and suggestions for modifications received via email, have been carefully considered in developing the kit. Subsequently, a more formal questionnaire comprising 22 open and closed questions was sent to all teams to assess usage, covering utility, usability, and acceptability (Béguin & Cerf, 2004; Renaud, 2020) (Table 1).

Table 1 Survey to assess the utility, usability, and acceptability of the kit

From January to March 2023, 22 responses from unique users or teams were collected. Based on the various comments, further exchanges via email or videoconference took place to improve the tool. The main critiques related to the kit's usability. For example, the tokens for cognitive, motor, and emotional educational objectives, positioned above the main pieces, took a very long time to put away in the dedicated bags; consequently, these tokens were directly integrated into the main pieces. The taxonomies used to define the educational objectives were difficult to master, and some pieces had pictograms that were hard to understand. To address this, a "Memo" piece was added that includes a modified version of Bloom's taxonomy for the cognitive field (Anderson & Krathwohl, 2001; Bloom et al., 1956), Berthiaume and Daele's taxonomy for the motor and emotional fields (2013), and legends for the various pictograms, as detailed in Appendix 1, Table 6, ID 6.

Lastly, as a significant improvement, the organization of the pieces was enhanced using a thermoformed plastic tray. Due to the reduction in the number of pieces and the use of new materials (polypropylene) to produce the pieces, the weight of the kit was significantly reduced by approximately 57% between version 1.7 and the final version 2.4 (from 6.61 lbs to 2.87 lbs).

Thus, in September 2023, a stable version of the kit (v. 2.4) in French and then in English was created after 14 intermediate versions that took all these parameters into account.

The name “Eduscript Doctor”, chosen in September 2023 to designate this tool, was carefully selected to capture and embody the core academic values of this analogical instrument dedicated to pedagogical scripting.

To test and exemplify the use of the pedagogical scenario kit, it was necessary to consider the possibility of modeling any type of training, from the simplest teaching scenario (massed unimodal) to the most complex (distributed multimodal). In this study, we chose to focus on MOOCs due to their ease of access, but primarily because of the numerous critiques of their low educational quality (Margaryan et al., 2015), leading to low engagement (Bote-Lorenzo & Gómez-Sánchez, 2017; Depover et al., 2017; Tcheng Blairon & Cristol, 2020) as well as high attrition rates (Sun et al., 2019). As a result, modeling three MOOCs that had not been previously designed with the tool seemed relevant as an initial assessment of its utility and usability.

4.2 Conduct of the study

A call for projects was launched online on 14 October 2022 to find three teams willing to test the kit for future re-engineering of their MOOC. Three teams quickly came forward to participate in the research protocol. Participation involved first receiving training on the kit (either remotely or in person) and then testing the kit on their MOOC in a second phase.

The three teams wanted to model their MOOC with varying degrees of re-engineering in mind (Table 2). Most of those trained in using the kit were instructional designers (or similar professionals), as well as audio-visual technicians who had participated in the scripting.

Table 2 Pedagogical teams and MOOCs that participated in the study

The training was divided into two distinct parts. The first part presented the kit's research and development and the links between the kit, pedagogical scripting, and microlearning. Workshops were organized to enable the creation of increasingly complex and multimodal scenarios by gradually using the different parts of the kit (Appendix 1). The second part of the training focused exclusively on the scripting of their MOOC, with dedicated support.

An initial modeling was carried out using the French version 1.7.2 of the kit, then updated for this article to the September 2023 version 2.4 and translated into English. Durations in the code dictionary were calculated based on the types of resources used or determined empirically. For texts, reading time was estimated via the online tool Textconverter.io with the "voice-over" and "Normal" playback rhythm settings. For videos, the total watch time was used. For evaluations or other activities without a fixed time, an acceptable average duration was defined by the team.
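As an illustration of how a reading duration can be derived from a text, a minimal sketch follows; Textconverter.io's exact parameters are not public, so the 150 words-per-minute rate below is purely our assumption for a "Normal" voice-over pace:

```python
import math

def reading_time_minutes(text: str, wpm: int = 150) -> int:
    """Estimate reading duration, rounded up to whole minutes.

    wpm=150 is an assumed 'Normal' voice-over pace, not the actual
    setting used by Textconverter.io.
    """
    words = len(text.split())
    return max(1, math.ceil(words / wpm))

# A 450-word text at 150 wpm takes about 3 minutes to read aloud.
sample = "word " * 450
```

Whatever rate is chosen, the point is that each textual brick receives a comparable, reproducible duration in the code dictionary.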

It is important to point out that all three teams also took part in the research and development of the kit following this study, which means that, thanks to their active participation, they were familiar with the evolution of the pieces.

To evaluate the utility and usability of the kit, a qualitative approach based on observations and non-directive interviews during the training and modeling was used. The evaluation criteria were selected from the questions in Table 1 of the questionnaire. Only questions 5 and 6 on utility were not suitable for our evaluation context.

In the next chapter, we detail the results of the modeling of three MOOCs developed using the pedagogical scenario kit by the three design teams. Then, we provide a synthesis of the observations and non-directive interviews conducted with the three teams to evaluate the utility and usability of the kit and its methodology.

5 Results

Two different platforms are used to disseminate the three MOOCs whose scripts we describe: the first is that of Team 1 for the MOOC "Guidance education" (Fig. 5), and the second that of Teams 2 and 3 for the other two MOOCs, "Everything you need to know about itching" and "Adolescent development". For both platforms, it should be kept in mind that the modeling represents a path logic, even if, in the end, this path is not followed linearly or in its entirety by a typical learner.

Fig. 5 Screenshot of the general organization of the MOOC "Guidance education"

The modeling of these different training courses was carried out in full using the English version 2.4 of the pedagogical scenario kit, with precision and without any synthesis. The models offer a linear and temporal graphical representation of the different possible learners' paths extracted from the platforms. It is important to remember that the three models presented are only the result of a retrospective analysis carried out with the teams and that the scripting methodology was not used to create the MOOCs themselves.

The three MOOCs are ranked from the simplest to the most complex. Each scenario is followed by a simplified code dictionary for readability. In Appendix 2, a complete code dictionary for each MOOC (Appendix 2 – Tables 9, 10 and 11) lists each Learning Unit, its objectives, durations, and some information on the activities. The precise description of each activity and its content is not included in the article, in order to focus on form rather than substance. Nevertheless, the pedagogical scenario obviously has a very strong impact on the proposed disciplinary content, since it must be adapted by designers to allow constructive alignment (Biggs, 1996). The goal is to ensure that assessments and learning objectives are well aligned with the learning activities implemented.

5.1 First MOOC: a resource library

The MOOC "Guidance education" is presented on the Team 1 website with three different spaces, identified in Fig. 5 by three colored areas. The training is available to learners at all times, all year round. On the right, in blue, is the table of contents, which can be navigated by clicking on the links. At the top, in orange, is a timeline allowing users to locate themselves in the course, using grey tiles (Figs. 5 and 6).

Fig. 6

Choice of the division of the MOOC into sessions by the teaching team

Finally, in green, the course itself offers an activity or several resources, depending on navigation via the table of contents or the timeline.

The pedagogical team considered that the course was not a single two-hour session but a sequence divided into several sessions. For this reason, Fig. 6 depicts 7 sessions leading to the issuance of a certificate, with the first session, S0, corresponding to the presentation of the training.

MOOC 1 has been fully modeled into four distinct and chronological parts, respectively illustrated in Figs. 7, 8, 9 and 10, to facilitate its interpretation as a single, cohesive entity. The same approach will be adopted for describing the other two MOOCs.

Fig. 7

Sequence and sessions 0 and 1 of the “Guidance education” MOOC

Fig. 8

Sessions 2 and 3 of the “Guidance education” MOOC

Fig. 9

Sessions 4 and 5 of the “Guidance education” MOOC

Fig. 10

Session 6 of the “Guidance education” MOOC

In the modeling in Fig. 7, the sequence banner indicates the sequence number, the MOOC title, a 2-hour training duration, a division into 6 sessions, as well as cognitive, motor and emotional objectives (BBA) (Appendix 1, Table 6, ID 6) for the sequence, which were determined a posteriori by the pedagogical team. Objectives were also assigned to the different activities during modelling, using the tool, to show their educational autonomy. The learning activities panel shows 8 items, 4 of which are active and 4 passive. With the help of a video and text, session 0 (before the start of the MOOC) sets objectives with a presentation of the training. It makes 4 links, using the “Recall – Call” pieces (Appendix 1 – Table 8, ID 1 and 2), to a Facebook community, the sites TrouveTaVoie and Inspire, and an explanation of how to obtain the certification. This last piece makes a link with the code Sq 1 S X U X, indicating that the certificate is obtained for the entire training, i.e., Sq 1, since the session (S) and unit (U) fields are not filled in. It should be noted that the concepts of brick, granularization and microlearning are easily applied to this MOOC, since each activity derived from a broader content is short and identifiable, with specific autonomous content, so that a code and a duration can be associated with it.

For the rest of the modeling (Figs. 8, 9 and 10), we chose to dissociate the sessions, using the letters a and b to differentiate the courses from the files (resources) made available. Some activities have no duration because they are symbolic, such as Sq1 S0 U2 and Sq1 S0 U5 (Fig. 7), which link to social networks and the final certificate, respectively. In Session 1a, we can see links between Session 0 and Session 1b (resources).
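To make this coding logic concrete, the hierarchical Sq/S/U scheme can be sketched as a small data structure. This is only an illustration of the logic described above, not part of the kit itself; the class and field names are our own, and the wildcard convention (an unfilled field widens the scope of the code) follows the Sq 1 S X U X example.

```python
from dataclasses import dataclass
from typing import Optional
import re

@dataclass(frozen=True)
class UnitCode:
    """Hierarchical code identifying a Learning Unit: Sequence > Session > Unit.

    An unfilled (X) session or unit widens the scope of the code:
    "Sq 1 S X U X" designates the whole sequence, as with the certificate piece.
    """
    sequence: int
    session: Optional[str] = None   # e.g. "0" or "1a"; None encodes "X"
    unit: Optional[int] = None      # None encodes "X"

    @classmethod
    def parse(cls, text: str) -> "UnitCode":
        m = re.fullmatch(
            r"Sq\s*(\d+)\s*S\s*(X|\w+)\s*U\s*(X|\d+)", text.strip(), re.I
        )
        if m is None:
            raise ValueError(f"not a valid code: {text!r}")
        session = None if m.group(2).upper() == "X" else m.group(2)
        unit = None if m.group(3).upper() == "X" else int(m.group(3))
        return cls(int(m.group(1)), session, unit)

    def scope(self) -> str:
        """Return the level of granularity the code refers to."""
        if self.session is None:
            return "sequence"
        if self.unit is None:
            return "session"
        return "unit"

# The certificate piece points at the entire training:
print(UnitCode.parse("Sq 1 S X U X").scope())   # sequence
# A concrete activity, in either spaced or compact notation:
print(UnitCode.parse("Sq1 S0 U2").scope())      # unit
print(UnitCode.parse("Sq1S3U3").scope())        # unit
```

The parser accepts both the spaced (“Sq 1 S X U X”) and compact (“Sq1S3U3”) notations that appear in the models.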

Sessions 1a to 5a (Figs. 7, 8 and 9) have the same pattern with, firstly, two passive activities consisting of videos and texts and, secondly, a passive activity with a podcast and text. Each of the passive activities links to dedicated resources.

Only session 6 (Fig. 10) is different in the way activities are organized, by proposing a synthesis along with new resources.

The code dictionary (Table 3; Appendix 2 – Table 9) makes it possible to visualize the presence of 50 main pieces as well as the sequence and session durations. It should also be noted that the activities proposed in the training contain only resources and that no collaborative activities are integrated into the script. We can also see that reading and viewing the content of the Learning Units takes learners a minimum of 2 h and 5 min if they follow the entire sequence, whereas the training time indicated in the banner in Fig. 5 was 2 h.

Table 3 Simplified code dictionary of the “Guidance education” MOOC

For these various reasons, this initial MOOC does not directly promote the active engagement of learners in a real learning experience. We therefore felt it more appropriate to classify it as a resource library rather than as a course per se.

5.2 Second MOOC: a transmissive pedagogy

The MOOC platform interface for Teams 2 and 3 is also divided into three main areas, as shown in Fig. 11: a horizontal menu (1), the course tree (2) and a navigation bar (3).

Figure 11 shows that there are several navigation zones. The red box (1) contains several tabs that are configurable by the teaching team and therefore differ from one MOOC to another. Access to the course, the forum (discussion tab) and the student’s progress (grades obtained in the course) takes place at this level.

Fig. 11

Screenshot of a MOOC with a dedicated page for discovering the platform

Two other areas, highlighted with green (2) and yellow (3) boxes, are specifically dedicated to the course. The left-hand side (2) displays the teaching tree as a breakdown of the disciplinary content, presented either by week or by modules or sequences, according to the terminology used by the teaching teams.

It can be observed that Team 1’s platform, presented previously (Fig. 5), is generally simpler and more linear in its organization, with fewer tabs, than the one shown in Fig. 11. This organization has implications for the modeling of courses, as it allows designers to more easily reveal connections between various elements such as forums and informational emails. This approach will be used for the third MOOC to highlight advanced community management strategies.

The MOOC “Everything you need to know about itching”, offered in the “Health” category, was available online from 5 December 2022 to 17 April 2023. The course presentation page announced that it would run for 6 weeks and require a 12-hour effort, at a pace of about 2 h per week. This information is listed on the sequence banner, together with the sequence number, the MOOC title and the sequence objectives determined a priori by the team (Fig. 12). It is also noticeable that the objectives of the sessions and Learning Units are identified, even though they were defined a posteriori by the team during analysis with the tool.

Fig. 12

Week 1 of the “Everything you need to know about itching” MOOC

The first week of training begins with access to a module entitled “To get started before taking this MOOC”, which we coded as “Session 1a” to differentiate it from “Session 1b”, exclusive to the course (Fig. 12). Session 1a includes the start-of-training email with a regulation strategy (yellow piece (Appendix 1 – Table 8, ID 5)), a number of explanations necessary for using the environment and its operation (orange piece (Appendix 1 – Table 7, ID 4)), as well as an optional questionnaire to identify learners’ profiles (green piece (Appendix 1 – Table 7, ID 3)).

Compared to the other weeks (Figs. 13, 14 and 15), it should be noted that session 1b (Fig. 12) has the particularity of not offering a summative evaluation. Weeks 2 to 6 (Figs. 13, 14 and 15) are all organized in the same way, with an email at the beginning of the session, session objectives (blue piece (Appendix 1 – Table 7, ID 2)), course videos with text (between 2 and 4 videos), an automated quiz-type assessment and an optional forum, considered a collaborative activity. The 7th session is not to be considered an additional training week: it is dedicated to an evaluation questionnaire and to explaining how to obtain an open badge. It should be noted that learners are encouraged to complete the questionnaire, hence the motivation piece (Appendix 1 – Table 8, ID 6) attached to the green piece relating to the assessment.

Fig. 13

Weeks 2 and 3 of the “Everything you need to know about itching” MOOC

Fig. 14

Weeks 4 and 5 of the “Everything you need to know about itching” MOOC

Fig. 15

Weeks 6 and 7 of the “Everything you need to know about itching” MOOC

The code dictionary (Table 4 and Appendix 2 – Table 10) includes 50 different activities, including assessments, for a total of 4 h and 28 min of writing time. This duration is 2.7 times shorter than the 12 h of learning time planned by the teaching team, i.e., a ratio of almost 1 to 3 between content time and estimated learning time. The activities panel shows 6 activities, 4 of which are active and 2 passive.

Table 4 Simplified code dictionary of the “Everything you need to know about itching” MOOC

Although evaluations are scheduled at the end of the MOOC, the activities offered are mainly video resources (activity number 2) to be watched, without any real strategy to guide learners towards an active attitude. This is why this course has been categorized as a MOOC of a transmissive nature.
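As a quick sanity check, the content-to-learning-time ratio reported above can be recomputed. This is plain arithmetic on the figures already given in the text, with no further assumptions:

```python
# Content time from the code dictionary vs learning time announced to learners.
content_minutes = 4 * 60 + 28    # 4 h 28 min in the code dictionary
announced_minutes = 12 * 60      # 12 h announced by the teaching team

ratio = announced_minutes / content_minutes
print(round(ratio, 1))           # 2.7 — close to the empirical 1-to-3 ratio
```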

5.3 Third MOOC: from transmissive pedagogy to a more active pedagogy (community management)

The MOOC “Adolescent development”, offered in the “Psychology” and “Sociology” categories, was accessible for its last session from 2 October 2018 to 4 December 2018 (Figs. 16, 17, 18, 19 and 20). The course presentation page announced an 8-week training period and an effort of around 16 h, i.e., about 2 h per week.

Fig. 16

Week 1 of the “Adolescent development” MOOC

Fig. 17

Weeks 2 and 3 of the “Adolescent development” MOOC

Fig. 18

Week 4 of the “Adolescent development” MOOC

Fig. 19

Week 5 of the “Adolescent development” MOOC

Fig. 20

Weeks 6 and 7 of the “Adolescent development” MOOC

The first training week was divided into sessions 1a and 1b to separate the courses from particular real or symbolic resources (Fig. 16): the MOOC presentation page; the email at the beginning of the MOOC; how the certification and the platform work; the presence of a forum; a Padlet; and specific actions such as “Let’s get acquainted”, the “Motiv’ Tuesdays”, which summarize what is covered, the “Monday recap”, which is a reminder of what has been covered, and finally the optional questionnaire at the beginning of the MOOC. Weeks 1b, 3, 4 and 5 (Figs. 16, 17, 18 and 19) are made up of instructional videos, comic strips and, at the end of the week, testimonies from teenagers, with text. Weeks 2 and 6 (Figs. 17 and 20) are organized in the same way, but without the comic strips. All activities with videos link to the forum, and all course videos are followed by formative assessments to ensure learners have understood. These evaluations are also summative, since they count towards obtaining the training certificate (Fig. 20). Each week, learners are sent two emails, except for the 6th and 7th weeks (Fig. 20), when only one is sent. In week 3 (Fig. 17), a symbolic and non-persistent social media activity (Sq1 S3 U3) is offered, which is linked to the activities that follow. Each week, a “Transcripts” activity is offered in isolation before the “Teenagers’ words” activity. The last week (Sq1 S7, Fig. 20) is atypical compared to the others: it is dedicated to the conclusion of the training, with an email, instructions on how to obtain the certificate under certain conditions, the end-of-MOOC questionnaire and a link to an optional conference.

The MOOC offers 76 different activities with a total duration of 5 h and 52 min (Table 5 and Appendix 2 – Table 11).

Table 5 Simplified code dictionary of the “Adolescent development” MOOC

It is worth noting that, to calculate the duration of activities incorporating a video followed by a formative evaluation, the team arbitrarily decided to add 2 min to video durations to account for evaluation time. Nevertheless, the entire course still shows a ratio of about 1 to 3 relative to the 16-hour learning time planned by the teaching team. The activities panel shows 12 activities, 4 of which are active and 8 passive.

To compensate for such massive use of resources (videos and a few comic strips), designers tried to make learners active and encourage their engagement, with the help of formative assessments, a forum, a Padlet and numerous email reminders. This is why, unlike MOOC number 2, we prefer to classify this MOOC not as a MOOC with a purely transmissive pedagogy, but rather as a hybrid, integrating elements of both transmissive pedagogy and active pedagogy.

After studying the design of the 3 MOOCs produced by the 3 teams, we now present the main results of the qualitative study aimed at evaluating the usefulness and usability of the pedagogical scripting kit.

5.4 Synthesis of the kit usefulness and usability assessment

First, during the training and modeling of the MOOCs by the three teams, we observed a difference in professional experience. Team 1 was generally older and more experienced in instructional design than Teams 2 and 3, which were made up of younger practitioners. Consequently, expectations of the kit differed: Team 1 was keener to evaluate the quality of their MOOC, while Teams 2 and 3 seemed more focused on using the kit to evaluate themselves through the quality of their work. Thus, perceived usefulness (Q4, Table 4) was very different. For Q5, regarding the nature and order of the proposed tasks, the teams agreed on a way to proceed, since the kit itself does not impose a specific scripting strategy.

Regarding the comparison with existing tools, almost all members of the three teams were at least familiar with the ABC Learning Design kit, while the Learning Battle Cards kit was less well known. Those trained in ABC Learning Design, especially in Team 1, had never tried to model an existing course with that tool and therefore could not compare the two uses. Regarding Q6 and the relevance of the time allotted for using the tool, all teams managed to model almost the entirety of their MOOC in 3 h. Only Team 1, whose members debated at length about ideas for improvement as they went along, had not completely finished modeling the last session, even though, in retrospect, theirs was the smallest MOOC to model.

Regarding usability, the tool appeared particularly suitable (Q10) for this audience, almost exclusively composed of instructional designers. In our study, the different parts of the kit were used without real difficulty, and it was rarely necessary to intervene during modeling to recall elements of the tool training (Q13). As with Q5, question 11 – concerning the tool’s flexibility and the need to adapt or modify it – was heavily influenced by group discussions. For example, some team members took the initiative to create and share mind maps with the group to organize their ideas. Some wanted to arrange the tables in the room differently to better visualize and discuss the kit’s modeling. A member of Team 1 used extra sheets of paper to write down the contents of Learning Units. Indeed, in version 1.7, unlike in the current version (2.4), it was not yet possible to write information directly on the main pieces.

Nevertheless, despite a generally positive assessment, several difficulties arose with questions 12 and 14, which concern understanding how the kit works and the workload involved in using it, respectively. For many team members, the first moments of discovering the kit were met with high apprehension. Indeed, the number of kit pieces and the fact that the modeling takes the form of a code that initially had no meaning for them were a real cause for concern. Some members of Teams 2 and 3 said their first impression was that the tool was too abstract and not user-friendly. According to them, this feeling was amplified by the kit’s appearance, which resembles a board game, though it is not one. It was only at the end of the training, after assimilating this new language, that the negative feeling largely dissipated. However, in Team 3, some wished to see the kit evolve towards a more playful approach, while in Team 2, some wanted concrete examples of activities to help them find new ideas.

Regarding the workload involved in using the tool (Q14), it is important to note that training is mandatory before using the kit, which in our context was not an issue since it was part of the evaluation process. However, some members of Teams 2 and 3 expressed concerns about the kit’s use by other professional categories. They found some aspects of the kit too complex, particularly the use of multiple taxonomies (cognitive, motor, emotional), as well as the time required for modeling. They felt it would not be compatible with the busy schedule of a secondary school teacher, who must prepare numerous lessons, as it would require too much training and preparation time. Conversely, they noted that such a tool would be indispensable in complex training contexts such as MOOCs, to create and analyze future productions.

Finally, at the end of the various modeling sessions, during the kit’s storage phase, all three groups complained about not being able to save their work digitally. They pointed out that having to erase, store, and then potentially reproduce the same modeling was tedious and time-consuming. They also mentioned needing enough space in their office to leave the modeling in place if they had neither the time nor the wish to store it. Conversely, some members saw this as an advantage, as it allowed them to keep the training in view, discuss it with colleagues, and return to it more easily and regularly. Others came up with the idea of photographing their work to save it and then share it with colleagues in digital form.

After detailing the main results of this research and development, the next chapter will be devoted to discussing the advantages and limitations of the scripting kit and its methodology.

6 Discussion

In this article, we have highlighted that some designers prefer using simple Post-it notes to collaboratively design their courses, despite the availability of digital and analog tools that could assist them in doing so. By analyzing existing analog tools, we found that they do not allow their users to engage in an inductive design approach. To address this gap, we undertook research and development to create a new analog tool aimed at facilitating an inductive approach, encouraging collaboration, and optimizing the instructional design process. Several prototypes of pedagogical scripting kits were developed, along with a new scripting methodology centered on the concept of bricks. We then studied the kit’s usefulness, usability and methodology through a qualitative study involving three teams modeling their MOOCs.

In this chapter, we draw on the modeling of the 3 MOOCs by the different teams to discuss the intrinsic value and limitations of using the scripting kit to describe such courses. From a methodological perspective, we then discuss the results of the empirical study on the kit’s usefulness, usability and methodology as perceived by the teams. Finally, we show that several regularities could be extracted from this modeling, allowing us to tentatively envision a new Learning Design theory centered on the concept of bricks.

6.1 Advantages and limitations of using the scripting kit to describe the 3 MOOCs

In this research, three MOOCs were modelled in their entirety by the design teams, using Eduscript Doctor (prototype v. 2.4). The tool’s modelling principle is to represent all the theoretical learning paths, composed of various optional or compulsory activities, in asynchronous training such as MOOCs, be they linear or not. In the MOOCs studied, learners can move from one activity to another as they wish, even though a specific path is laid out for them. The modelled journey is therefore not an exact reflection of learners’ navigation, and hence of their actual learning experiences, which can vary depending on their needs, learning pace, interests, and interactions with content. This difference between modeling a virtual learning path and the actual learner experience may be seen by some as a limitation of this type of modeling tool. However, modelling multiple paths should not be confused with navigation. Sometimes, different paths are planned by designers, whether alternative, freely chosen, progressive, complementary, or even optional; in that case, they would all have to be represented with the kit, using new aggregates in the modeling. In the 3 MOOCs studied, a single pathway is offered to all learners, although other choices could have been made by the teams, such as hybrid pathways, remedial pathways after a diagnostic assessment, differentiated pathways with micro-credentials, etc.

In order to model training courses with the kit, we adopted an approach that breaks down the content into sequences, sessions and Learning Units, using a granularization method. This terminology is commonly used to structure courses in the French education system, which justifies its use here. However, as the examination of the three MOOCs shows, these terms do not always correspond to the terminology used by training designers. It is therefore more relevant to see this division into Sequences, Sessions and Units as a practical strategy for artificially segmenting content. Segmentation is essential for encoding and describing the various contents and the relationships between activities, but it is not a truth in itself.

Regarding the formulation of training objectives, the kit provides for them to be set at three levels (Chap. 3.2). First, there is an overall definition of the objectives for the entire sequence, which in this article represents a MOOC in its entirety. Then come the session objectives, which mark the first step of dividing the content into aggregates. Finally, pedagogical objectives are set for each Learning Unit. In the latter case, assigning learning objectives to short Learning Units helps steer the design towards optimized microlearning, also known as nugget learning, providing a short, focused, and effective educational experience (Burguete & Urrego, 2023). Although this requires an investment of time at the design stage, it may be a good idea to set learning objectives in advance for at least the first two levels. This improves accuracy, allows for better planning of progress and, most importantly, informs learners of their goals, which tends to increase engagement. In the case of these three MOOCs, the objectives of the two lowest levels (sessions and Learning Units) were sometimes defined a posteriori, using the tool, during modeling by the design teams. Although artificial, this approach is nevertheless useful in the reflexive analysis, as it highlights the designers’ educational strategy during the course design phase, even though learners were not informed of it.

Modelling the 3 MOOCs makes it possible to know the nature of the activities offered in the courses, whether active or passive. Thanks to the codes that identify each activity, we can see the missing or existing links between the different parts of a course. For example, we realize that the content is always new in all 3 MOOCs and that, apart from the summative assessments, no activities invite learners to make retrieval efforts. From the Call or Recall pieces (Appendix 1 – Table 8, ID 1 and 2), we can also see that some resources are little or not at all exploited in the training, such as those related to the transcriptions highlighted in MOOC 3 (Sq1S3U10, Fig. 17) or the forum in MOOC 2 (Sq1S1aU4, Fig. 16). Regarding the content offered, training course modeling is not limited to identifying directly educational content: it also integrates links with other resources such as emails (Sq1S1aU2, Fig. 16), certificates or open badges (Sq1S7U3, Fig. 20), a Padlet (Sq1S1aU9, Fig. 16), social networks (Sq1S3U3, Fig. 17) or any other object, whether real or symbolic. All this information makes it possible to visualize community management strategies and to identify or plan precisely which parts will be evaluated or highlighted (emails, teasers, social networks, remediation, etc.). To do this, as in MOOCs 2 and 3, it is relevant to separate, within the same session, the elements that belong to the course itself from those that surround it, grouping them into aggregates. This is why sessions a and b were artificially created, to bring better visibility to the different articulations. Thus, while the course activities appear isolated in the modeling, analysis of their frequency (numbers), nature (red or green color coding) and regularity (pattern) becomes easy to carry out. If red is in the majority, we can deduce that the course is more resource-oriented; conversely, if green is in the majority, we will conclude that the course is more activity-oriented.
Likewise, with the help of the numbers, we can visualize the varied or monotonous nature of the content offered. However, in a course focused on (red) resources such as instructional videos, for example, it is always possible to offer learners a strategy to turn them into spectators (seated surfers) or into actors (surfers standing in a wave), and therefore act on their behaviors to foster engagement. Thanks to the color coding of the main pieces, we can see, for example, that there is no summative assessment in session 1b of MOOC 2 (Fig. 12), whereas there is systematically one in all the other sessions. Learners might conclude that this training week is less important than the others. In addition, regarding the nature of the activities and their positioning within the course, it should be noted that in MOOC 3, the comic strips aimed at arousing learners’ emotions and questions are placed at the end of each session, after the main course content. Modelling raises the relevant question of whether these comic strips could be moved to the beginning of the sessions, so that learners would better benefit from the teaching that follows: they would have the opportunity to find answers to the questions raised by the comic strips and could then discuss them with their peers on the forum.
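The frequency analysis described above can be sketched in a few lines. This is a minimal illustration with invented data, not the kit’s own procedure: the session list, the color tags and the activity numbers are hypothetical, following the red (passive) / green (active) convention described in the text.

```python
from collections import Counter

# Hypothetical modelled session: (color code, activity number) per activity.
# "red" = passive resource, "green" = active task, per the kit's convention.
session = [
    ("red", 2),    # instructional video
    ("red", 2),    # instructional video
    ("green", 5),  # automated quiz
    ("red", 4),    # podcast with text
]

def orientation(activities):
    """Deduce whether a modelled course leans resource- or activity-oriented."""
    colors = Counter(color for color, _ in activities)
    if colors["red"] > colors["green"]:
        return "resource-oriented"
    return "activity-oriented"

def monotony(activities):
    """Share of the most frequent activity number: near 1.0 means monotonous."""
    numbers = Counter(number for _, number in activities)
    return numbers.most_common(1)[0][1] / len(activities)

print(orientation(session))         # resource-oriented (3 red vs 1 green)
print(round(monotony(session), 2))  # 0.5 (videos make up half the session)
```

The same counts could of course be read directly off the physical modeling; the sketch only makes explicit the deduction rule stated in the text.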

Regarding the overview of learning activities (Appendix 1 – Table 6, ID 3), which is used to identify the various passive and active activities, it should be noted that it is not the number of different activities that proves diversity in a training course, but rather their frequency of use. Indeed, although there are many different activities in all 3 MOOCs, the educational video is by far the most commonly used. It is therefore essential to analyze the modeling properly, using the color codes and numbers of the different activities as well as the timing and pace at which they appear.

Although it does not have to be completed, the code dictionary (Appendix 2 – Tables 9, 10 and 11) has the added benefit of making it possible to check whether some activities are too long and whether their completion time is compatible with learners’ learning time. In MOOC 1, the total duration of the activities corresponds to the training duration announced by the design team, which justifies considering the MOOC more as a resource library than as a course per se. By contrast, the duration of the MOOC 2 and 3 courses was multiplied by three by the teams to integrate the learning time. This ratio of 1 to 3 is often used empirically by designers to estimate an online course duration.
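The duration check that the code dictionary enables can be sketched as follows. The codes, durations and the 10-minute threshold are invented for illustration; the kit prescribes no such threshold:

```python
# Illustrative check of a code dictionary (invented codes and durations):
# flag Learning Units that exceed a microlearning threshold, and total the
# content time for comparison with the duration announced to learners.
MAX_UNIT_MINUTES = 10            # assumed threshold, not prescribed by the kit

dictionary = [                   # (code, duration in minutes)
    ("Sq1 S1 U1", 6),
    ("Sq1 S1 U2", 14),
    ("Sq1 S2 U1", 8),
]

total = sum(minutes for _, minutes in dictionary)
too_long = [code for code, minutes in dictionary if minutes > MAX_UNIT_MINUTES]

print(total)     # 28 minutes of content in this toy dictionary
print(too_long)  # ['Sq1 S1 U2'] exceeds the assumed threshold
```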

Now that we have detailed and discussed the results of the different models, we can move on to the usefulness and usability of the scripting kit as perceived by the teams.

6.2 Kit usefulness, usability and methodology assessments

The results of this qualitative evaluation are generally positive in terms of both usefulness and usability. However, these results should be considered cautiously due to a major selection bias among the study participants. Most participants were instructional designers, likely more willing, motivated, and interested in the pedagogical scripting tool than other professionals randomly selected from other universities or training centers. It is important to note that the kit is also intended to be used individually or in teams by trainers who are not specialists in instructional design or by students in training in this field. Therefore, further studies will be necessary to assess its use in more varied contexts.

Moreover, due to an insufficient budget, particularly for manufacturing and providing an adequate number of kits for an extended period, our study is hampered by its small number of participating teams and the inability to conduct long-term follow-up. This limitation prevented a thorough evaluation of the kit’s usefulness and usability, as the assessment was conducted at a specific moment in time.

Regarding criticisms of the kit’s apparent complexity, we believe this is primarily related to its inductive nature, which is inherently less self-explanatory. As a result, in the training that precedes the use of the tool, we chose to explain the concepts of granularization and microlearning before introducing the tool itself. Future studies will need to confirm that this strategy effectively alleviates initial apprehensions about the tool.

Finally, regarding the storage of the kit after use, significant efforts were made in version 2.4 to facilitate putting the pieces back in the box. However, it still takes some time to erase the writing and store the pieces. We believe, however, that an alternative fully digital version of the tool might not be a good idea, as it would remove the tangible and possibly collaborative aspects that are the tool’s strengths, similar to the practice of using Post-its. Nevertheless, a digital version that complements rather than replaces the tool could capture the scripts and, in some cases, allow them to be further developed on a computer or whiteboard. This would eliminate the need to photograph scripts to save them.

As a conclusion to these last two parts, concerning the modeling of the 3 MOOCs and the discussion of the qualitative study results on the kit’s usefulness and usability, we can validate at least the first of our two main hypotheses. Indeed, the kit proved useful as it “allows a clear visualization of the structure of a MOOC, thereby facilitating the evaluation of its pedagogical design”. However, although the kit was widely usable by the educational teams, our qualitative data do not allow us to assert that the kit is suitable for an audience of instructional designers, due to the selection bias associated with team recruitment and the absence of quantitative data. Thus, further studies will be needed to confirm this last hypothesis.

Now that we have discussed the results of the various models and the teams’ feedback on the kit’s usefulness and usability, the next sub-chapter highlights the need to develop an abstraction of the brick-centered scripting methodology in the form of a Learning Design theory.

6.3 Regularities and possible abstraction of results towards creating a learning design theory centered on the concept of bricks

Improving the kit methodology during R&D and using it during this experimental study allowed us to highlight several design regularities around the brick concept. We were able to define 8 general principles, which raise the possibility of going beyond the threshold of a simple methodology towards the more abstract level of a Learning Design theory centered on the concept of bricks. In our next investigations of the kit, we plan to continue reflecting on this theory, since we recognize it still warrants many tests and adjustments. In doing so, we will be able to further refine the scripting methodology used in Eduscript Doctor, and even develop new tools. Here are the premises of this new theory, presented in the form of 8 major provisional principles that allow for an abstraction of the scripting methodology based on the concept of bricks.

  • 1st principle: the brick as a training unit

    All teaching and learning devices and, more broadly, all formal, non-formal, or informal educational experiences can be represented in the form of one or more isolated bricks or in the form of aggregates.

  • 2nd principle: the brick as a container with its contents

    As a container, one or more bricks can serve as a support for any existing pedagogical model, exclusively or in combination. The container can be “filled” with educational content, but it can also be empty, that is, without educational content.

    Content can be predetermined by the designer and team, developed by the learner autonomously or collaboratively, co-designed by all stakeholders, or imposed on learners in an unprogrammed educational situation. To this end, the content of a brick is at least a resource or an activity in which the learners behave either actively (as spectators or actors) or passively, if they feel no need or desire to learn.

  • 3rd principle: The brick can be self-contained

    When a brick is an autonomous educational object, it is self-sufficient. In this context, it must proactively or retroactively provide at least one targeted pedagogical objective. It may or may not be linked to other bricks.

  • 4th principle: The brick is identified to become unique

    Each brick (self-contained or not) needs to be identified with a code so that it can be described. This makes it possible, on the one hand, to locate the brick in space and time within a script and, on the other, to reuse it as a Teaching and Learning Resource (TLR) or as an Open Educational Resource (OER).

  • 5th principle: Bricks can be connected to other bricks

    When several bricks appear in isolation or, on the contrary, in the form of one or more aggregates, identifying each brick with a code makes it possible to link them, either as a “call” from one brick to another or as a “recall” of a previous brick. A code can also identify brick aggregates, allowing connections between an aggregate and a brick or between a brick and an aggregate.

  • 6th Principle: A real or symbolic brick

    A brick can represent a partial or complete teaching and learning situation, as well as any concrete or symbolic object such as a tool, an instrument, a strategy, a concept, or a theory. In this second case, the brick is necessarily represented in isolation, yet it must be related to another brick.

  • 7th Principle: Brick duration

    Not all bricks have a duration, as some contain concrete or symbolic objects that are integrated via a link into another brick. All other bricks, representing activities or resources, have durations that can be defined from an average activity or consultation time, an estimated presence time, or an estimated learning time. Depending on the learning situation, a brick's duration may be minimum, maximum, fixed, or variable.

  • 8th Principle: Designing the content of a brick

    The use of the brick concept does not necessarily imply dividing disciplinary content or defining pedagogical objectives. The design can be carried out by at least one teacher or trainer (top-down approach) or by at least one learner (bottom-up approach). In addition, a brick can represent either massed or distributed learning, depending on the bricks' durations and number.
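To make the abstraction concrete, the eight principles above can be read as a small data model: bricks carry a unique code (4th principle), optional content and objective (2nd and 3rd), an optional duration (7th), a concrete-or-symbolic flag (6th), and links to other codes (5th); aggregates group bricks (1st). The sketch below is purely illustrative, assuming hypothetical names; none of these fields or classes come from the Eduscript Doctor kit itself.

```python
from dataclasses import dataclass, field
from typing import Optional, List

# Illustrative sketch only: a hypothetical encoding of the brick principles.

@dataclass
class Brick:
    code: str                            # 4th principle: unique identifier
    objective: Optional[str] = None      # 3rd principle: objective if self-contained
    content: Optional[str] = None        # 2nd principle: the container may be empty
    duration_min: Optional[int] = None   # 7th principle: duration is optional
    symbolic: bool = False               # 6th principle: concrete vs. symbolic object
    links: List[str] = field(default_factory=list)  # 5th principle: calls/recalls by code

@dataclass
class Aggregate:
    code: str                            # aggregates are identified like bricks
    bricks: List[str] = field(default_factory=list)  # 1st principle: bricks group into aggregates

# Example: a short self-contained video brick linked to a symbolic concept brick.
concept = Brick(code="C1", symbolic=True, content="spaced repetition")
video = Brick(code="B1", objective="explain spaced repetition",
              content="3-minute video", duration_min=3, links=["C1"])
module = Aggregate(code="A1", bricks=[video.code, concept.code])
```

Note that the symbolic brick has no duration, consistent with the 7th principle, while the activity brick does.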

7 Conclusion

This study has explored a new methodological approach to pedagogical scripting by highlighting the use of a pedagogical scripting kit, called Eduscript Doctor, to support it. Through the creation and analysis of this new methodology, we examined the benefits and challenges of applying it, especially in the context of the three MOOCs studied. The empirical evaluation of the kit revealed that it was generally useful and usable by design teams in the context of MOOCs, offering tangibility, practicality, flexibility, and reflexivity in the design process.

Although further studies in different contexts and with other audiences are necessary to confirm the possible uses of the pedagogical scripting kit, the initial results of this research lead us to believe that Eduscript Doctor is a promising solution. It proves to be both concrete and pragmatic for the design and analysis of educational scripting, and valuable for both practitioners and researchers in the field of education and training sciences.

The methodology proposed in this article focuses on the notion of “brick”. It also offers an interesting perspective for designing and planning any type of teaching and learning method and material beyond the pedagogical scripting kit itself. It has the advantage of freeing itself from the duration and size constraints associated with the concept of microlearning, while preserving the essential characteristics of its architecture. The Eduscript Doctor kit, although analogic in form, proved invaluable to the design teams of the three MOOCs presented.

A number of regularities emerged from this empirical study, suggesting the possibility of a new Learning Design theory that, once fully validated, would further improve the power of the tool.