3.1 Design of Educational Interfaces

User interfaces are an inherent part of any technology with human end-users. The role of an interface is to facilitate efficient communication and information exchange between the machine (the technology) and the user (the human). User interfaces (UIs) rely on what we call “interface metaphors”: sets of visuals, actions, and procedures incorporated into the UI that exploit knowledge users already have of other domains, such as their homes and working environments. The use of appropriate interface metaphors allows users to predict the functionality of each element of the interface, resulting in more intuitive use and more predictable system behavior. Confusion is avoided, as the various elements of the UI need no explanation, and users are aware of the impact their actions will have on the system. A time-tested example is the “desktop” metaphor, which portrays the operating system in terms of the objects, tasks, and behaviors found in physical office environments (Neale & Carroll, 1997).

The appropriate selection and application of UI metaphors make systems easy to use, and so we need to understand how metaphors are perceived by our targeted end-users. Good understanding will allow us to incorporate metaphors efficiently into our UIs. Below, we provide some commonly used metaphors that allow UI designers to develop intuitive interfaces. As technology advances and different applications are developed (including new ways of working, living, learning, and communicating), new metaphors need to be established to increase the usability of those applications. The examples show the centrality of metaphor to UI and the importance of drawing on real-world analogies (Table 3.1).

Table 3.1 Some commonly used UI metaphors

The selection of metaphors and the design of the UI depend heavily on the intended end-user and are therefore extremely important for research in both learning technology (where the learner is the end-user) and CCI (where the child is the end-user). For example, a straightforward note-making metaphor (e.g., for presenting new information) might be good for a technology that targets teachers but less effective for a technology that targets doctors (where a Post-it metaphor might work better). The same applies to all user groups, although learners and children are particularly interesting end-users. Learning is not always an easy process. It is associated with many aspects of interaction and cognition (including difficult mental operations and cognitive friction), and these differ across the developmental phases of a child. For instance, for very young children, even time-tested metaphors such as “desktop” can fail to convey the intended information. Therefore, it is important to work closely with the end-user to develop an appropriate set of visuals, actions, and procedures that can be incorporated into the UI to achieve the intended objectives (see, for example, Fig. 3.1; Asheim, 2012; Høiseth et al., 2013). Moreover, learning takes place in and across diverse contexts (e.g., online or in classrooms, labs, and maker spaces), and the content area (e.g., math, language, or art) plays an important role in the mental models users generate during learning and in how those models need to be taken into account to facilitate learning.

Fig. 3.1

Examples of how to achieve objectives by using familiar and repetitive elements (adapted from Asheim, 2012; Høiseth et al., 2013, with permission by Asheim and Høiseth)

The main focus of metaphors is ease of use, usability, and utility in representing a system’s functionality (Kuhn & Blumenthal, 1996). However, the capacity of UI metaphors to facilitate learning has long been recognized and valued by both the learning technology and the HCI communities (e.g., see Carroll & Mack, 1985; Neale & Carroll, 1997). Metaphors facilitate learning by leveraging existing mental models or previously learned information and applying them to new contexts (Bruner, 1960). Learning is accelerated when metaphors are used, because users can draw on existing knowledge to reason about new problems (Streitz, 1988).

Contemporary research and practice recognize the importance of iteration and end-user participation during UI design (e.g., DiSalvo et al., 2017). Processes from HCI, such as rapid prototyping and low-fidelity paper prototyping (Wilson & Rosenberg, 1988), are commonly used in educational UI design. Those practices are advantageous because of their simplicity, low cost (no need for many working hours or materials/tools), and the ease of obtaining early feedback from the end-user. They also adopt the main steps of established instructional system models, such as ADDIE (analysis, design, development, implementation, and evaluation) (Branch, 2009), which allows the necessary steps to unfold iteratively. The powerful progression from a low-fidelity, pen-and-paper prototype to a working system is shown in Fig. 3.2 through two examples: one on the development of a UI for a multi-touch gamified quiz system that supports learning in museums (Noor, 2016), and one on the development of a UI for a self-assessment technology that supports online learning (Westermoen & Lunde, 2020). As the figure shows, the initial low-fidelity ideation is created using only pen and paper. The sketches are very basic, but they are also useful for determining how the user will interact with the interface; because the sketches in this phase of the design are low-fidelity, it is easy and “cheap” to change them. After the first iteration, some of the features are developed and tested with a few end-users, but even then it remains easy to test the basic functionalities and accommodate the results of the testing (e.g., in terms of metaphors used, information visualized, and actual functionalities). As the fidelity of the interface increases and more interactive functionalities (and the respective wireframes) are incorporated, it becomes more difficult and costly to accommodate structural changes. In the final stages of the process, we have a working system that can be tested through experimentation.

Fig. 3.2

The process from low-fidelity pen-and-paper prototype to a working system. Left: The development of a UI for a multi-touch gamified quiz system that supports learning in museums. (From Sharma et al., 2020; licensed under CC BY-ND 4.0). Right: The development of a UI for self-assessment technology that supports online learning. (From Westermoen & Lunde, 2020, with permission by Westermoen and Lunde)

Within the progression from low fidelity to high fidelity and ultimately the complete UI, the designer also needs to develop the navigation thread. The storyboard/navigation thread covers all the possible use cases and scenarios and the interconnections between the wireframes. Figure 3.3 shows the storyboarding of a self-assessment UI (adapted from Westermoen & Lunde, 2020); a simple sketch of such a navigation thread in code follows below. During the design of the educational UI, the designer needs to keep in mind who the intended end-users are (e.g., children, other learners); what their characteristics are (age, background knowledge); the expected objectives (learning goals, competence development); the different types of constraints (learning constraints, technological constraints, teachers’ competence); the delivery options and expected role of the technology; and its pedagogical underpinning. In addition to answering these very important questions, the UI designer needs to be able to gather information from end-users and test their ideas.
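
To make the navigation thread concrete, here is a minimal sketch (in Python) of a storyboard encoded as a directed graph of screens, with a check that every wireframe is reachable from the entry screen. The screen names are hypothetical, loosely inspired by the self-assessment UI in Fig. 3.3, and are not taken from the actual system.

```python
# A storyboard/navigation thread as a directed graph: each screen maps
# to the screens a user can reach from it. Screen names are hypothetical.
from collections import deque

navigation = {
    "login":       ["dashboard"],
    "dashboard":   ["quiz_select", "results", "settings"],
    "quiz_select": ["question", "dashboard"],
    "question":    ["question", "feedback"],   # next question or feedback
    "feedback":    ["question", "results"],
    "results":     ["dashboard"],
    "settings":    ["dashboard"],
}

def unreachable_screens(nav, entry="login"):
    """Breadth-first search: return wireframes no use case can reach."""
    seen, queue = {entry}, deque([entry])
    while queue:
        for nxt in nav[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(nav) - seen

print(unreachable_screens(navigation))  # ideally prints set()
```

Checking the storyboard in this form is cheap, which matches the rationale of low-fidelity prototyping: structural problems (e.g., orphaned views) are caught before costly high-fidelity wireframes are built.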

Fig. 3.3

Storyboarding of a self-assessment UI. (Adapted from Westermoen & Lunde, 2020; with permission by Westermoen and Lunde)

The iterative design process and storyboarding result in a collection of wireframes representing each possible view a user might encounter. The final UIs need to consider the context of use and provide the necessary guidelines for the implementation of the application. Figure 3.4 shows an example set of UIs in the context of mobile learning in higher education (more information about this example can be found in Pappas et al., 2017, and Cetusic, 2017).

Fig. 3.4

Final user interfaces (wireframes) of a mobile learning application. (Adapted from Pappas et al., 2017, with permission by IEEE)

3.2 Artifacts and Treatment Design

One of the first notions the researcher needs to understand in learning technology and CCI research (and also in neighboring fields) is the unit of analysis (UoA). The UoA is the object that is the target of the experimentation (and whose data we use as a unit in our analysis). The UoA can be an individual, a small group, an organization (a school), or the users of a certain technology. For instance, if we are interested in studying the effect of the use of a dashboard or a representation (an avatar) on students’ learning outcomes or attitudes, then the UoA is the student, since we will use the score of each student. If we want to study the introduction of a novel technology to support collaboration in dyads or triads, the UoA is the dyad or triad, since we will use the score of each dyad or triad (e.g., scores from a common assignment). Even objects can serve as the UoA: if we want to determine which of several interfaces is more attractive to students, then the UoA is the interface, and each interface’s score is aggregated from the students who use it. Identifying the UoA can be complex, as it is not always static. It is common for a study with a specific dataset to have different UoAs. For example, an analysis of student scores can be based on the scores of individuals, of classes (if we want to compare the practice of different teachers), or of different groups.
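
As a minimal illustration of how the choice of UoA changes the analysis, the following Python sketch aggregates the same score table at three different levels. The table and its column names are invented for illustration, not taken from any particular study.

```python
# The same dataset analyzed under three different units of analysis.
# All values and column names are illustrative.
import pandas as pd

scores = pd.DataFrame({
    "student": ["s1", "s2", "s3", "s4", "s5", "s6"],
    "dyad":    ["d1", "d1", "d2", "d2", "d3", "d3"],
    "class":   ["A",  "A",  "A",  "B",  "B",  "B"],
    "score":   [62,   71,   55,   80,   76,   68],
})

# UoA = individual student: each row is one observation (n = 6).
by_student = scores.set_index("student")["score"]

# UoA = dyad/triad: one score per group, e.g., a common assignment (n = 3).
by_dyad = scores.groupby("dyad")["score"].mean()

# UoA = class: class-level scores to compare teachers' practice (n = 2).
by_class = scores.groupby("class")["score"].mean()
```

Note how the effective sample size, and hence the statistical power of any test, changes with the UoA even though the raw data are identical.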

Another important concept that is a cornerstone in learning technology and CCI research (and also in neighboring fields) is that of “artifact” (or “artefact” in British English spelling) (Carroll & Rosson, 1992). Artifacts correspond to novel designs (which may be prototype systems, interfaces, materials, or procedures) that have a certain set of qualities or components (such as functionalities and affordances) and that allow us to experiment (e.g., to isolate and test certain components). Such experimentation serves to advance both empirical and theoretical knowledge, but it also supports the practice of a user (such as a learner or a child) and empowers them to achieve their potential. Artifacts allow us to formulate the necessary conditions by isolating certain functionalities and testing our hypotheses through experimentation. Each experimental study has its own value and should contribute to the main body of knowledge by validly testing theories that are contingent on designed artifacts, or by producing findings that may be reused to support the design of future artifacts in the form of lessons learned or design implications (Sutcliffe, 2000).

Contemporary learning technology and CCI research focuses on conducting “artifact-centered evaluations” that use artifacts in the experimental process. The most common approaches embed the experimentation process within a broader research procedure, with the intention of producing new knowledge and models and informing theories and practices. Such approaches inherit the characteristics of design research and are iterative. For instance, design-based research (DBR) is a common approach in learning technology, whereas the task–artefact cycle is commonly employed in HCI (see Fig. 3.5). Such research approaches are important, as they go beyond responding to a particular hypothesis, instead seeking to advance theoretical knowledge in the field by exploring and confirming various hypotheses and relationships in different contexts (see a representation in Fig. 3.6).

Fig. 3.5

Representations of common experimental processes. Left: Design-based research (DBR), which is commonly used in learning technology research (Barab & Squire, 2004). Right: The task–artefact cycle, which is commonly used in interaction design research. (Adapted from Carroll & Rosson, 1992; Sutcliffe, 2000). Both processes are iterative in nature and focus on producing practical and theoretical knowledge

Fig. 3.6

Iterative design research focusing on producing empirical and theoretical knowledge

Going back to the important role of artifacts in conducting empirical studies, we now provide some examples of how artifacts allow us to move from observations to designing treatments and testing hypotheses. A very common interface in learning technology research is the dashboard. Dashboards are used in learning management systems (LMSs) such as Canvas, Moodle, and Blackboard Learn, as well as in the majority of learning technologies (e.g., digital games, educational apps). Although there are differences in the information included, the visualizations employed, and the moments at which the dashboard appears, most dashboards include information related to learners’ activity, progress, and learning history. The information provided to the learner (and teacher) is intended to activate their awareness, reflection, and judgment (i.e., metacognition), and ultimately to support their potential (by informing them about the amount of time spent on a task, the difficulty of a particular question, and so on). Providing this information in an efficient manner will support learners’ self-regulation and motivation, as well as teachers’ learning design and decision-making, allowing them to make appropriate decisions about allocation of effort, time management, and skills development (Lonn et al., 2015).
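
To illustrate the kind of computation that sits behind such a dashboard, the sketch below reduces a raw interaction log to per-learner time-on-task, progress, and accuracy. The log schema (learner, task, start, end, correct) is our own assumption for illustration, not the schema of any LMS mentioned above.

```python
# Reducing a raw event log to the per-learner metrics a dashboard shows.
# The schema and all values are illustrative assumptions.
import pandas as pd

log = pd.DataFrame({
    "learner": ["s1", "s1", "s2", "s2"],
    "task":    ["q1", "q2", "q1", "q2"],
    "start":   pd.to_datetime(["2023-01-09 10:00", "2023-01-09 10:04",
                               "2023-01-09 10:00", "2023-01-09 10:07"]),
    "end":     pd.to_datetime(["2023-01-09 10:03", "2023-01-09 10:09",
                               "2023-01-09 10:06", "2023-01-09 10:12"]),
    "correct": [True, False, True, True],
})

log["minutes"] = (log["end"] - log["start"]).dt.total_seconds() / 60

dashboard = log.groupby("learner").agg(
    time_on_task=("minutes", "sum"),   # effort / time-management cue
    tasks_done=("task", "nunique"),    # progress cue
    accuracy=("correct", "mean"),      # reflection / judgment cue
)
print(dashboard)
```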

Figure 3.7 (top) shows a learning dashboard taken from a previously introduced example (Westermoen & Lunde, 2020); this dashboard was designed and introduced to support students’ self-assessment. The dashboard was introduced to one of two groups of students, and a mixed-methods study was conducted to investigate the role of the dashboard in digital self-assessment activities (Westermoen & Lunde, 2020; Papamitsiou et al., 2021). Figure 3.7 (bottom) shows a teacher dashboard; this dashboard was designed and introduced to support teachers’ decision-making (e.g., identifying students’ weaknesses and misconceptions, or students who need additional support). The dashboard was evaluated with experienced teachers to identify its usefulness and its ability to support decision-making and instruction (Luick & Monsen, 2022).
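
For the quantitative strand of such a between-subjects study, the comparison of the dashboard group with the control group might look like the sketch below. The scores are placeholders rather than data from the cited studies, and Welch’s t-test is only one reasonable choice of test.

```python
# Between-subjects comparison: did the group with the dashboard score
# differently from the control group? All values are placeholders.
from scipy import stats

dashboard_group = [74, 69, 81, 77, 72, 85, 79]  # hypothetical scores
control_group   = [70, 65, 72, 68, 74, 71, 66]  # hypothetical scores

# Welch's t-test does not assume equal variances across groups.
t, p = stats.ttest_ind(dashboard_group, control_group, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```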

Fig. 3.7

Top: Student dashboard with task-related analytics for each question. (From Westermoen & Lunde, 2020; with permission by Westermoen and Lunde). Bottom: Teacher dashboard with task-related analytics for the whole class and course. (From Luick & Monsen, 2022; with permission by Luick and Monsen)

Another example is provided by artifacts that lie at the intersection of CCI and learning technology. To investigate the effect of avatar self-representation (ASR), the extent to which the user/child is represented by an avatar, in learning games, we used three games that follow similar game mechanics but take different approaches to ASR. ASR is classified as low, moderate, or high, according to the degree of visual similarity between the avatar and the user (i.e., appearance congruity) and the precision and breadth of movement (i.e., movement congruity). Figure 3.8 gives a detailed description of the ASR classifications and the respective game interfaces; a schematic sketch of the classification follows below.
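
Purely as an illustration, the two congruity dimensions can be combined into a classification as in the sketch below. The numeric scale and thresholds are our own assumptions; the study itself assigns the three games to the ASR levels qualitatively.

```python
# A schematic version of the ASR classification: two congruity scores
# in [0, 1] are combined into low/moderate/high. Thresholds are assumed.
from enum import Enum

class ASR(Enum):
    LOW = "cursor"
    MODERATE = "puppet"
    HIGH = "image of the actual user"

def classify_asr(appearance_congruity: float, movement_congruity: float) -> ASR:
    """How closely the avatar resembles the user and mirrors their movement."""
    combined = (appearance_congruity + movement_congruity) / 2
    if combined < 0.33:
        return ASR.LOW
    if combined < 0.66:
        return ASR.MODERATE
    return ASR.HIGH

print(classify_asr(0.9, 0.8))  # ASR.HIGH, e.g., a mirrored camera image
```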

Fig. 3.8

Artifacts corresponding to three different degrees of avatar self-representation (ASR). (Adapted from Lee-Cultura et al., 2020, with permission by Lee-Cultura)

The group of children experienced all three ASRs (conditions), and during the treatment (i.e., a within-subjects experiment) we collected various types of data, with the goal of determining the role of ASR in children’s affect and behavior in motion-based educational games. The results indicated that moving from low ASR (a cursor) to moderate ASR (a puppet) and then to high ASR (an image of the actual user) decreased users’ stress and increased their cognitive load (see Fig. 3.9). You can find the complete study, with all the details and results, in Lee-Cultura et al. (2020).
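
Because each child experienced all three conditions, the observations are paired within participants, so the analysis must use a repeated-measures test. The sketch below shows one non-parametric option (a Friedman test) on placeholder stress values; the numbers are illustrative and do not come from the cited study.

```python
# Within-subjects comparison across the three ASR conditions.
# Each list holds one value per child, in the same participant order;
# all values are placeholders.
from scipy import stats

stress_low      = [0.62, 0.55, 0.70, 0.58, 0.66]  # low ASR (cursor)
stress_moderate = [0.51, 0.49, 0.61, 0.50, 0.57]  # moderate ASR (puppet)
stress_high     = [0.42, 0.40, 0.52, 0.45, 0.47]  # high ASR (user image)

# Friedman test: non-parametric repeated-measures comparison.
chi2, p = stats.friedmanchisquare(stress_low, stress_moderate, stress_high)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")
```

If the omnibus test is significant, post-hoc pairwise comparisons (e.g., Wilcoxon signed-rank tests with a correction for multiple comparisons) would locate which condition pairs differ.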

Fig. 3.9

Indicative results of the ASR study. (Adapted from Lee-Cultura et al., 2020, with permission by Lee-Cultura). The blue bars show 95% confidence intervals. Statistically significant differences are marked with * for p ≤ 0.05, ** for p ≤ 0.001, and *** for p ≤ 0.0001

The use of artifacts is powerful, but it also has limitations. For example, the results are associated with the particular artifact under study, and any knowledge obtained is not necessarily reusable or generalizable to other contexts. Nevertheless, artifacts allow us to conduct experiments and test hypotheses efficiently so as to enhance relevant practical and theoretical knowledge. In addition, certain time-tested approaches in both learning technology and CCI/HCI, such as DBR (Barab & Squire, 2004) and the task–artefact cycle (Sutcliffe, 2000), allow us to leverage iterative experimentation to go beyond context-specific hypothesis testing and produce reusable/generalizable knowledge.