10.1 Context in Experimental Studies

CCI and learning technology research is commonly constructed around the interplay of actors (learners, children, teachers, and parents), activities, and technology. It is informed by theory, conducted following experimental research methods, and reflects on epistemological stances. In most cases, the research is situated in particular contexts, which may be cultural, technological, infrastructural, or organizational. With the ultimate objective of making discoveries and contributing new and valid knowledge, the research can have significant implications for how people live and learn in technology-rich environments. Therefore, the knowledge obtained needs to be relevant and useful (i.e., contextualized) (Davison & Martinsons, 2016), as well as carry a certain degree of validity (i.e., generalizability) (Cheng et al., 2016).

Validity in research is often referred to in terms of generalizability and universalizability. A definition that is easy to understand and adequate for the learning technology and CCI fields describes validity as the “act of arguing, by induction, that there is a reasonable expectation that a knowledge claim already believed to be true in one or more settings is also true in other clearly defined settings” (Seddon & Scheepers, 2012). In learning technology and CCI, researchers may generalize knowledge in various ways, such as from one learning design to another, from one educational level to another, from one culture to another, from one context to the development of a new theory, and from one context to the extension of an existing theory. The same is true of other human-factors IT-related fields (Lee & Baskerville, 2012). However, an important question that is often posed in those fields is whether validity can reasonably be expected to extend to other contexts, given the well-defined contexts in which most research is conducted (Davison & Martinsons, 2016; Cheng et al., 2016).

The importance of contextualization and generalizability (sometimes referred to as particularism and universalism) has been extensively debated in several research fields (e.g., Deaton, 2010; Lee & Baskerville, 2012; Davison & Martinsons, 2016; Cheng et al., 2016). There has been a similar discussion in the field of learning technologies and HCI, with some studies focusing on achieving generalizability of their results (Sao Pedro et al., 2013) and others on producing contextually rich findings (Ferguson et al., 2014). There is general recognition that knowledge comes in various forms, ranging from highly general knowledge (e.g., universal laws) to highly contextualized insights (Höök et al., 2015; Höök & Löwgren, 2012). In addition, in the field of learning technologies, there are subcommunities (e.g., LAK and EDM) that adopt different stances and observe nuances in those two notions (Siemens & Baker, 2012). Regardless of one’s stance toward those two very important notions, it is generally agreed that context matters in learning technologies and CCI research, and the importance of generalizability should not be downplayed. Researchers need to understand the research context fully, as this, in combination with replication and triangulation, can contribute to the (cautious) construction of intermediate- and higher-level knowledge (Polit & Beck, 2010) of how humans learn, play, communicate, and live in technology-rich environments.

Because of the data-intensive nature of contemporary research and its focus on interventions (e.g., collecting LMS analytics, as opposed to older, relatively static approaches such as end-of-treatment surveys or interviews), the notions of contextualization and generalizability are of particular importance. Contemporary data collection has the capacity to bridge those two notions by reinforcing their complementarity, rather than fueling a debate that treats them as antagonistic. In particular, the capabilities of automated data collection (Sharma & Giannakos, 2020) afford a high degree of context-awareness (e.g., GPS, motion trackers, and accelerometers) and generalizability (e.g., measures with high internal and external validity, such as eye-tracking). Seminal work (Sharma et al., 2020) has provided evidence of the ability to support both context-awareness and generalizability. Therefore, learner and user analytics have the capacity to empower researchers to focus on the degree of contextualization and generalizability that is appropriate for the type of knowledge or theory they want to develop. Nevertheless, it should be emphasized that researchers must give explicit consideration to their research design, the details of the context in which the research will be conducted, and the contexts for which the findings may reasonably be relevant and useful.

10.2 Ethical Considerations

Ethical considerations are always relevant and mandatory for any human-factors IT-related research (as well as any research with human subjects in general; Belmont Report, 1979). The Norwegian National Committees for Research Ethics provide four general principles for conducting researchFootnote 1: respect (participants shall be treated with respect), good consequences (researchers shall seek to ensure that their activities produce good consequences and that any adverse consequences are within the limits of acceptability), fairness (research projects shall be designed and implemented fairly), and integrity (researchers shall comply with recognized norms and behave responsibly, openly, and honestly toward their colleagues and the public). For experimental studies, these principles are of paramount importance, since researchers may deliberately manipulate the independent variable with the goal of observing a change (Shadish et al., 2002). As per the European Commission’s report on Ethics for Researchers (European Commission, 2013), three main hallmarks of ethical research underpin the notion of “informed consent”: adequate information (being provided with all the necessary information), voluntariness (agreeing voluntarily to take part), and competence (being capable of grasping fully the potential risks of participation).

Ethical and methodological considerations are central when designing an experiment. For example, measures used to increase validity (e.g., deception of participants by using cover stories to orient them away from understanding the RQs) have been criticized (große Deters et al., 2019), as have approaches that seek to increase ecological validity by waiving informed consent (Grimmelmann, 2015). Nevertheless, there is a consensus that explaining the study to and debriefing participants after the experiment is mandatory in all cases (Belmont Report, 1979). Today, the involvement of and approval from an independent ethics committee is mandatory before conducting experimental research. Different countries take different approaches to forming and involving ethics committees. For instance, some countries have Institutional Review Boards (IRBs), whereas others have national review boards. Nevertheless, today there are established institutional, national, and international regulations, such as the EU’s General Data Protection Regulation (GDPR; https://gdpr.eu/), which provide guidelines for human-factors research such as in the fields of learning technology and CCI.

In the context of digital learning and learner-generated data, we have seen a number of endeavors and tools during the last decade. For instance, Slade and Prinsloo (2013) introduced a framework with a focus on ethics in digital learning and learning analytics. Other notable contributions are the JISC code of practiceFootnote 2 and the DELICATE framework (see Drachsler & Greller, 2016), which are useful tools to support learning technology research and practice. More recently, the International Council for Open and Distance Education (ICDE) produced a set of guidelines for ethically informed practice that is expected to guide research in digital learning and learning analytics across the world (Slade & Tait, 2019). In summary, the main ethical considerations in relation to learner-generated data can be grouped into the following categories.

  • Privacy considerations: how personal data are handled and protected from unauthorized use. Practices such as de-linking linked data, anonymization, and codification are often used (where possible).

  • Data ownership considerations: information about the ownership, use, and distribution of data. This is another important consideration that protects participants’ rights, for example, by ensuring that data will not be passed on or used for unintended additional purposes.

  • Consent considerations: mandatory provision of documentation that clearly describes the processes involved in data collection and analysis. Consent must be received from each individual participant (or, in the case of children, assent from the individual and consent from the legal guardian) before any experimental study.

  • Transparency considerations: providing the necessary information and being transparent with respect to which data will be collected, why and how they are going to be analyzed, and under what conditions.
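The privacy practices mentioned above (anonymization and codification) can be illustrated with a minimal sketch. The function below replaces a learner's direct identifiers with a salted hash code, so that records from the same participant can still be linked across sessions without storing the identity itself. The field names and salt handling here are hypothetical illustrations, not a prescribed implementation, and real projects should follow their institution's data protection procedures.

```python
import hashlib

def pseudonymize(record, salt, direct_identifiers=("name", "email")):
    """Replace a participant's direct identifiers with a salted hash code.

    The salted SHA-256 code allows linking records from the same
    participant across sessions without retaining the identity itself.
    """
    key = "|".join(str(record[f]) for f in direct_identifiers)
    code = hashlib.sha256((salt + key).encode("utf-8")).hexdigest()[:12]
    # Drop the direct identifiers and attach the opaque code instead
    cleaned = {k: v for k, v in record.items() if k not in direct_identifiers}
    cleaned["participant_code"] = code
    return cleaned

record = {"name": "Ada", "email": "ada@example.org", "quiz_score": 7}
print(pseudonymize(record, salt="keep-this-secret"))
```

Note that pseudonymized data of this kind are still considered personal data under the GDPR as long as re-identification remains possible, so the salt must be stored securely and separately from the data.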

Although this point has already been mentioned, it is important to emphasize that some categories of participants require special attention.

  • Children. This is the most relevant category for this chapter, since it is central to CCI as well as to learning technology (e.g., in K-12 education). The European Commission’s report on Ethics for Researchers (European Commission, 2013) clarifies that when children are involved in research, care and consideration are pivotal. It also requires a clear justification for involving children in research (“the involvement of children in the research must be absolutely necessary and, if so, all particular ethical sensitivities that relate to research involving children must be identified and taken into account”), and the European Textbook on Ethics in Research devotes a detailed section to research involving children (European Commission, 2010, pp. 65–74).

  • Vulnerable adults. This category includes, but is not limited to, elderly people, people with learning difficulties, and severely injured patients.

  • People from certain cultural or traditional backgrounds. In some communities, notions of individuality, written permission, or written agreement do not exist, and certain groups (such as women) may not be permitted to act autonomously. In such communities, the European Commission (2013) clarifies that “strategies must be developed to address these issues with respect for the specificities of the situation.”

In addition to these categories that clearly require special attention, it is important for the researcher to consider any potential unequal power relationships (Levine et al., 2004). For example, students, teachers, and children might find themselves in a situation where they experience discomfort (e.g., having to act in certain ways in front of their teachers or parents) or even disadvantages (e.g., being socially excluded if they decide not to participate in the study). This puts the voluntariness of their participation in question, and researchers should take all appropriate measures to avoid potential negative effects of participation, emotional stress, or other discomfort (große Deters et al., 2019).

In recent years, we have seen extensive discussion on ethical challenges in the design and use of interactive technologies for children (Hourcade et al., 2017). We have also seen a slight shift in publication venues with respect to ethical considerations in human-factors IT-related research. Historically, most publication venues have not required an ethical statement from the authors, which means that ethical issues experienced during the studies may not have been properly reported. However, some publication venues, such as the IDC and IJCCI,Footnote 3 now require a dedicated section (called, for example, “Selection and participation”) in which the authors of the paper describe how the participants were selected, what assent/consent processes were used (i.e., what the participants were told), how the participants were treated, how data sharing was communicated, and any additional ethical considerations. Although the introduction of such a mandatory section in research papers (as with any other regulation-driven checkbox exercise) cannot enforce in-depth consideration of the potential ethical challenges that might emerge from experimentation, it definitely helps by providing a baseline and a certain level of awareness in research communities.

Before closing this subsection, it is important to note that issues such as children’s privacy, AI, social media, and media sharing have not been extensively covered in this book, owing to limitations of space and scope. However, we would like to highlight some of these issues in this final paragraph. Today’s children are growing up with technologies that use sensor data and data-driven interactions (e.g., multitouch technology and motion-based technology). Their dispositions toward the use of their personal data (e.g., voice interfaces and other affordances that rely on biometric recognition) might be different from those of adults. Therefore, these technological advancements pose fundamental questions as to which technological futures we should be developing and how we face and mediate ethical issues and dilemmas when doing research or designing technology to support children’s learning, play, and living (Antle et al., 2021; Eriksson et al., 2021). Contemporary technologies are often “invisible” (e.g., ubiquitous systems), and their intelligence is fueled by unconsciously produced data and sophisticated AI techniques that evolve continually and are in daily use. Future work should consider the ethical issues and dilemmas that emerge from this, and we must proceed with care and responsibility around the potential implications of our research designs, methods and practices, and the resulting technologies.

10.3 Working with Children

Researchers conducting experimental studies with children might be required to employ different methods, approaches, and techniques, as observed in much of the CCI research literature and a recent dedicated chapter (Markopoulos et al., 2021). Nevertheless, we would like to offer a summary of the motivations for and importance of employing child-centered approaches that focus on individual abilities. One example is the use of a traditional verbal questionnaire; such an instrument assumes that respondents are able to think abstractly about their experience. However, children younger than 12 (i.e., those in middle childhood, or at the stage of concrete operations in the Piagetian tradition) have not yet developed these skills; instead, their thinking processes are based on mental representations that relate to concrete events, objects, or experiences. This must be taken into account when adapting the measurement method to the level of cognitive development of the child participant. Following this line of reasoning and related work in child development and psychology (Harter & Pike, 1984), most CCI research methods (e.g., smileyometers and fun sorters; Read & MacFarlane, 2006; see also Fig. 10.1) use visual methods (or observations and qualitative, checklist-based measurements), which are known to be more effective than verbal methods (Döring et al., 2010). Such visual analogs represent specific situations, behaviors, and people to whom the child can easily relate.

Fig. 10.1

Top: The Smileyometer, a Likert-style visual analog scale (VAS) that was designed with the help of children. Bottom: A completed fun sorter, which allows children to rank items against one or more constructs. (From Read & MacFarlane, 2006; with permission by ACM)
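As an illustration of how such visual instruments yield analyzable data, the sketch below codes Smileyometer-style responses as ordinal scores. The face labels and the 1–5 coding are assumptions for illustration only, not part of Read & MacFarlane's specification; the key point is that Likert-style data are ordinal, so the median is a safer summary than the mean.

```python
from statistics import median

# Hypothetical 1-5 ordinal coding of five Smileyometer-style faces
SMILEY_SCORES = {
    "awful": 1,
    "not very good": 2,
    "good": 3,
    "really good": 4,
    "brilliant": 5,
}

def summarize(responses):
    """Summarize ordinal responses; median is preferred over mean
    because the distances between adjacent faces are not equal."""
    scores = [SMILEY_SCORES[r.lower()] for r in responses]
    return {"n": len(scores), "median": median(scores)}

print(summarize(["Brilliant", "Good", "Really good"]))  # {'n': 3, 'median': 4}
```

A known caveat with such instruments, noted in the CCI literature, is that young children tend to select the most positive face, so researchers often triangulate these scores with observations.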

Besides the actual instruments used, it is important for CCI and learning technology researchers to consider potential collusion (e.g., when administering questionnaires to a group of children in one place). When it comes to open-ended questions and embodied communication, it is likely that the researcher will be unable to work out what all the words and body signals mean. Moreover, some children will choose to skip some tasks, not follow the depicted usage scenario, or not answer all the questions. This often happens in CCI research, and it is important for the researcher to be able to orchestrate the experiment in real time while considering potential reasons and interpreting the results accordingly. Potential reasons include children being tired or bored, being unable to read or understand the question, not knowing the answer or how to write it, or any combination of these (Markopoulos et al., 2021). In recent years, we have seen a plethora of tools used to collect children’s opinions and experiences (e.g., the Fun toolkit and laddering). There are also different ways to adapt or modify an instrument from research with adults so that it can support CCI research with children as participants.