Introduction

Recent advances in the study of self-regulated learning processes as events that temporally unfold in real time during learning and problem solving are transforming the fields of metacognition and self-regulated learning (SRL). New methods for detecting, tracking, collecting, and analyzing SRL data as events that have specific nonstatic attributes, such as frequency of use, duration, time-dependent patterns of use, and dynamics that include feedback mechanisms, offer novel ways to examine and understand the role of these processes across learning contexts, age groups, tasks, learning activities, etc. (Azevedo et al. 2010; 2013; Greene et al. 2011a, b; 2013). These novel methods can reveal important patterns of SRL events, based on the use of various types of data (e.g., utterances, conversational turns, log-files), that can significantly enhance our current understanding of the sequential and temporal nature of self- and socially-regulated learning (Azevedo et al. 2011a, b; Winne and Hadwin 2013). Therefore, these new methods, despite being exploratory in nature, have the potential to transform current conceptions of SRL: they can augment our models and theories by adding delineated microlevel processes (e.g., specific metacognitive processes) to existing theories and models that are either too abstract or focus only on macrolevel processes (e.g., monitoring), and they can generate testable hypotheses based on the types of process data used and the results obtained (Veenman et al. 2006; Veenman 2013; Winne and Azevedo 2014; Zimmerman 2008).

The five articles presented in the special issue of Metacognition and Learning, coedited by Inge Molenaar and Sanna Järvelä (this issue), offer an array of novel and sophisticated approaches to examine the temporal and sequential patterns of self- and socially-regulated learning processes. This special issue is timely and provides the interdisciplinary community of cognitive, educational, learning, computational, and instructional scientists the opportunity to examine the ways in which cutting-edge analytical techniques can advance the field beyond currently stagnant methods and techniques (e.g., frequency analysis, dyadic state-transition analyses). The research community will benefit from employing the techniques described in the articles because they have the potential to transform contemporary conceptions of SRL (Hadwin et al. 2011; Järvelä and Hadwin 2013).

As such, my commentary on the five studies included in this special issue has two main goals: (1) to summarize each study, emphasizing key findings and highlighting critical issues; and (2) to raise issues, challenges, and questions related to conceptual and theoretical, methodological and analytical, and instructional concerns.

Summary, Key Findings, and Critical Issues Presented in the Five Articles

The article by Kuvalja et al. (this issue) compares three methodological approaches used to analyze patterns of co-occurring, nonverbal behaviors and self-directed speech in 24 six-year-olds during a planning task. The authors use lag sequential analysis and t-pattern analysis to detect significantly recurring patterns of self-directed speech and nonverbal behavior that either are self-regulatory or show a failure of self-regulation. Furthermore, they argue that the analysis of these co-occurrences is required in order to establish the functions of self-directed speech and to determine in what ways these might be self-regulatory. They provide illustrative analyses of the data from a study comparing the patterns of self-directed speech use during a planning task in typically developing children and matched peers with specific language impairment (SLI). The results are presented with a focus on the advantages and disadvantages of the three methods. For example, the results obtained from t-pattern analysis reveal qualitative differences between these two groups of children in their use of self-directed speech that were not detected by the other two methods.

Kuvalja et al.’s paper is an excellent example of the dire need to extend current methods typically used to analyze process data. A major highlight of their paper is the comparison of multiple analytical approaches used to measure temporally unfolding processes related to self-regulation. They do an exceptional job of justifying the theoretical model and processes of interest (i.e., self-directed speech and nonverbal behavior) in young children during a planning task. The presentation of their coding scheme is outstanding, and the presentation, rationale, and justification for the use of the three methods are clear and well-articulated. Similarly, the comparison of the results across the three methods is particularly impressive.

In addition to the strengths, Kuvalja et al.’s paper (this issue) also raises some issues that need to be addressed by future researchers. One major issue lies in understanding the algorithms embedded in the commercially available software (e.g., Noldus Technologies THEME) used by researchers to analyze process data. It is rarely the case that the software (e.g., as in Kuvalja’s study) embodies the theoretical assumptions of self-regulated learning. For example, when the t-pattern algorithm searches for a significant temporal relationship between a pair of event types, is a dyad of events the unit most reflective of the temporal events related to planning? If all the analyses are performed bottom-up, do the resulting patterns make sense from theoretical, contextual, task, and individual perspectives? How does the software handle contextual issues that are pertinent to understanding the patterns? If the time interval length is assumed to be invariant, then how does this assumption impact the results and their interpretations? Does the algorithm also detect and delete duplicate or incomplete versions of other detected patterns? Why is this necessary, and how does this decision influence the results and our understanding of the underlying verbal and behavioral processes?
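To make the dyad-search step concrete, the following minimal sketch counts, for each ordered pair of event types, how often the second follows the first within a fixed time window. The behavior codes and the window are hypothetical, and THEME's actual critical-interval test is considerably more elaborate; this only illustrates the kind of bottom-up pair detection at issue.

```python
def dyad_counts(events, window):
    """Count, for each ordered pair of event types (a, b), how often an
    occurrence of b follows an occurrence of a within `window` seconds.
    `events` is a list of (timestamp, event_type) tuples sorted by time."""
    counts = {}
    for i, (t_a, a) in enumerate(events):
        for t_b, b in events[i + 1:]:
            if t_b - t_a > window:
                break  # events are time-sorted, so no later pair can qualify
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

# Toy stream of coded behaviors (hypothetical codes and timestamps)
stream = [(0.0, "self_talk"), (1.2, "gaze_shift"), (1.8, "self_talk"),
          (2.5, "gaze_shift"), (6.0, "self_talk"), (9.0, "gaze_shift")]
print(dyad_counts(stream, window=2.0))
```

A full analysis would then test each pair's count against its expected frequency under independence before calling it a pattern, which is precisely where the algorithm's statistical assumptions deserve scrutiny.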

Several other issues raise questions that should be addressed in future research. More specifically, how much planning does the task actually require? The type of planning expected of the children seems quite different from what is commonly meant by planning (e.g., writing an essay, solving a word math problem). Also, are six-year-olds capable of planning? The sample is quite small, as is the number of coded verbal and nonverbal behaviors collected from the young children in both groups (see Table 3 in Kuvalja et al. this issue). The authors’ choice to code the verbal and nonverbal data separately is interesting, and one might wonder whether contextual cues were missed because the data streams were not coded concurrently. The authors also used a particular time window in which to analyze data. This is a recurring problem in the field, as researchers attempt to determine the time window needed to analyze events. The issue is further compounded by other critical factors, such as the context, task, age of learners, type of process data, and sampling rate. In sum, Kuvalja and colleagues’ study represents an impressive comparison of three methods, each with strengths and weaknesses, that extends beyond the typical frequency count analyses of published studies in the field.

Malmberg et al. (this issue) examine what types of learning patterns and strategies elementary school students use to carry out ill- and well-structured tasks. In particular, they investigate which learning patterns emerge with respect to students’ task solutions, and when. Twelve elementary school students participated in two science lessons in which they were asked to solve well- and ill-structured tasks with the gStudy learning environment, which is designed to support strategic learning. The system collected log-file traces to investigate how task type might affect strategic learning. Methodologically, students’ task solutions were rated according to three categories (i.e., “on track”, “off track”, and “partial solution”), followed by analyses of the learning patterns. The authors investigated the learning strategies that emerged throughout these tasks and then used detailed cross-case analysis to explore, in depth, how and when these learning patterns were used with respect to the students’ task solutions. Overall, the results show that young students could provide in-depth task solutions and adapt to task complexity. However, the students exhibited the same types of learning patterns across the two tasks.

The Malmberg et al. study is important for several reasons. First, it deals with issues that will advance the field’s understanding of how elementary school students’ strategic learning unfolds to meet the demands of different types of tasks. As such, the authors tackle fundamental questions such as the following. What types of task solutions might elementary school students produce when studying different task types? What types of learning patterns (expressed as learning strategies) do they use to carry out those tasks? How do these learning patterns emerge across tasks with respect to students’ task solutions, and what is the quality of their strategy use in ill- and well-structured tasks? Second, the use of computer-generated log-files to capture learning strategies during studying is a major advantage. Third, Malmberg et al. focus on how task types influence strategy use, since an underlying assumption is that students should use different strategies to successfully complete each task type. Conducting a multiday experiment also offers an opportunity to examine both quantitative and qualitative changes in strategy use, and the cognitive tools embedded in gStudy offered the opportunity to study the impact of static scaffolds on strategy use during science learning. The use of state transition matrices to analyze dyadic SRL events (e.g., planning → monitoring) is not new and is a standard approach to examining the deployment of SRL processes (e.g., Johnson et al. 2011). Lastly, the results are presented as descriptive statistics, simple state transition matrices (called learning patterns), and in-depth qualitative descriptions of strategy use.
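The dyadic transition-matrix approach can be sketched in a few lines. The SRL codes and the trace below are hypothetical stand-ins, not the authors' actual categories or data; the sketch only shows how first-order transitions (e.g., planning → monitoring) are tallied from a coded event sequence.

```python
def transition_matrix(sequence):
    """Count first-order transitions between coded SRL events.
    Returns the sorted state labels and a count matrix where
    matrix[i][j] is the number of transitions states[i] -> states[j]."""
    states = sorted(set(sequence))
    index = {s: i for i, s in enumerate(states)}
    matrix = [[0] * len(states) for _ in states]
    for a, b in zip(sequence, sequence[1:]):
        matrix[index[a]][index[b]] += 1
    return states, matrix

# Hypothetical coded log-file trace for one student
coded = ["planning", "monitoring", "strategy_use",
         "monitoring", "planning", "strategy_use"]
states, m = transition_matrix(coded)
for label, row in zip(states, m):
    print(label, row)
```

Because each cell conditions only on the immediately preceding event, longer regulatory episodes (e.g., planning → monitoring → strategy adaptation) are invisible to this representation, which is the analytical limitation raised below.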

Despite the contributions of the Malmberg et al. study, it raises several noteworthy critical issues that should be addressed by future research. For example, fundamental issues related to the comparability of the tasks need to be addressed. Also, how does the small sample size influence the analyses, results, and implications for instruction? Is a single source of process data (i.e., log-files) sufficient to reveal learning strategies? How did the lack of counterbalancing impact the results? What are the theoretical and analytical limitations of using a transition matrix that focuses on dyadic codes to examine strategies? In sum, Malmberg and colleagues’ study represents an outstanding approach to studying complex issues related to young children, task types, and analyses of strategy use that combines quantitative and qualitative analyses of self-regulated learning.

The study by Molenaar and Chiu (this issue) examines sequences of regulatory activities during elementary school children’s collaborative learning. They specifically tested whether sequences of cognitive, metacognitive, and relational activities affected subsequent cognition. Eighteen triads were scaffolded by an avatar as the students wrote a report about a foreign country, which generated 51,338 turns. The authors explored the turns (i.e., sequences of talk) using statistical discourse analysis (SDA), which revealed some very interesting patterns, including the following: (1) after low cognition, high cognition, planning, or evaluation, both low and high cognition were more likely to occur; (2) after monitoring or positive relational activities (e.g., confirm, engage), low cognition was more likely to occur; and (3) after a denial, high cognition was less likely to occur. Overall, their results suggest that metacognitive planning organizes subsequent cognitive activities and facilitates transitions between knowledge-acquisition activities, whereas relational activities help enact them.

Their study contributes to the literature in many significant ways. For example, their results can influence microtemporal theories of social regulation and shared knowledge construction. In addition, their description and elaboration of the macro- and micro-level regulatory processes based on existing theories of SRL (e.g., Winne and Hadwin 2008), and their attempts to incorporate contemporary models of socially-shared regulated learning (Hadwin et al. 2011), are important given the nature of the collaborative task. The research questions posed by Molenaar and Chiu (i.e., What are the sequential relationships among metacognitive, cognitive, and social relational activities during collaborative learning? Does scaffolding affect these relationships?) are of key importance in advancing the field beyond overly simplistic treatments of process data that use sequences to understand the cyclical nature of SRL. The Ontdeknet system enabled the researchers to examine different sequences of regulatory activity because it allowed the students to collaborate with an expert (embedded in the virtual system) and with one another face-to-face in small groups. Lastly, the use of SDA is a major advance over the traditional analytical tools currently used to infer SRL processes from sequential patterns in verbal trace data. In sum, Molenaar and Chiu’s study explores how sequences of students’ cognitive, metacognitive, and relational activities affect the likelihoods of subsequent lower versus higher cognitive activities during collaborative learning, how these relationships differ across time, and whether scaffolding affects them.

Their study highlights some issues that need further elaboration and discussion. For example, additional information is needed to understand how the computer environment analyzed student attention, focus, behavior, and progress on the task, including the algorithms used by the system to determine the nature, content, and timing of the scaffolds. The dichotomization of cognitive processes (i.e., low cognition vs. high cognition) raises interesting issues related to the nature of the SRL processes, the amalgamation of various SRL processes into a single category, the influence of exploratory methods on the results, and the challenges of deriving instructional implications. Similarly, decisions regarding the classification and categorization of certain SRL processes (e.g., what constitutes a metacognitive activity?) echo current debates in the field. For example, what is the difference between a metacognitive activity and a metacognitive process? Why is planning considered a metacognitive activity? Why are monitoring and control activities considered metacognitive? Lastly, the study does not connect the multilevel analyses of local regulatory processes with overall learning gains.

The fourth study presented in this special issue was conducted by Bannert et al. (this issue), who use process mining techniques to analyze individual regulation in 38 college students who used a hypermedia system to learn about concepts and principles of operant conditioning while providing think-alouds. Bannert et al. hypothesized that successful students would perform regulatory activities such as analyzing, planning, monitoring, and evaluating cognitive and motivational aspects during learning, not only with a higher frequency than less successful learners, but also in a different order. As such they demonstrated how various methods developed in process mining research can be applied to identify process patterns in SRL events as captured in verbal protocols.

Again, this study by Bannert et al. highlights the value of extending the current methods used to analyze trace data from participants’ verbal data by using exploratory analyses, such as process mining techniques. They also provide a well-specified coding scheme, not only emphasizing cognitive and metacognitive processes but also including motivation (albeit with a single code). In addition to detecting differences in the frequencies of SRL events, they also uncovered temporal patterns in the self-regulated learning of the most and least successful students. For example, compared to less successful students, successful students showed more orientation and planning processes before they processed the information to be learned. During reading, they elaborated on the information more deeply, constantly monitored different learning events, and performed evaluation activities. From an analytical perspective, Bannert and colleagues have demonstrated that process mining can potentially contribute to current methods of analyzing verbal data, including through algorithms for sequential and temporal data beyond those applied in this study.
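One building block of many process-mining discovery algorithms is the directly-follows relation, which can be derived from coded verbal protocols in a few lines. The codes, cases, and frequency threshold below are hypothetical illustrations, not Bannert et al.'s data or their specific mining algorithm.

```python
from collections import Counter

def directly_follows(cases, min_count=2):
    """Build a directly-follows relation across several coded protocols
    (cases), keeping only edges observed at least `min_count` times, as
    frequency-based process-mining tools do when deriving a model."""
    edges = Counter()
    for case in cases:
        for a, b in zip(case, case[1:]):
            edges[(a, b)] += 1
    return {edge: n for edge, n in edges.items() if n >= min_count}

# Hypothetical coded think-aloud protocols from three students
protocols = [
    ["orient", "plan", "read", "monitor", "read", "evaluate"],
    ["orient", "plan", "read", "read", "evaluate"],
    ["plan", "read", "monitor", "evaluate"],
]
print(directly_follows(protocols))
```

Note that the `min_count` threshold already embodies an analytical decision: rare but theoretically meaningful regulatory transitions are filtered out, which is one reason the assumptions behind such algorithms deserve explicit discussion.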

Similar to the other studies, the Bannert et al. study raises several critical issues. As elsewhere in this special issue, there are concerns about sample size, the taxonomy of the coding scheme used to code SRL processes, implicit assumptions underlying the commercial software’s algorithms, and researchers’ explicit assumptions and decisions in the treatment of the process data. There are also potential confounds in not controlling for participants’ (i.e., Psychology and Education college students) prior knowledge of the topic of the hypermedia task (i.e., concepts and principles of operant conditioning). Further, the nature of the SRL processes captured may reflect the fact that the content focused on declarative knowledge, which may have led participants either to spend most of their time reading the content (associated with low prior knowledge) or to engage in an inordinate amount of metacognitive monitoring (as they recalled prior knowledge from long-term memory and determined whether they were familiar enough with the content or needed to read it). Lastly, the duration of each SRL process may also be of value in determining not only the quantity, but also the quality of the temporally unfolding SRL processes (Azevedo et al. 2010).

The last study reported in this special issue is by Kinnebrew et al. (this issue), who hypothesized that metacognition and self-regulation are important components of effective learning in the classroom, but that novice learners often lack effective metacognitive and self-regulatory skills. Consistent with emerging evidence in the field, they proposed that metacognitive processes can be developed through practice and appropriate scaffolding. In their study, 73 seventh-grade students used Betty’s Brain, an open-ended computer-based learning environment designed to help students practice their cognitive skills and develop metacognitive strategies as they learn science topics (e.g., climate change). The authors analyzed students’ activity sequences, comparing different categories of adaptive scaffolding. Their results showed that it is possible to detect and interpret students’ learning strategies as they worked in the Betty’s Brain environment.

The paper by Kinnebrew and colleagues is important for several reasons. First, it focuses on learning issues related to adolescents’ understanding of science with an advanced learning technology. Second, though deeply rooted in the original conceptualizations of metacognition, it proposes a cognitive and metacognitive task model that is embodied in the Betty’s Brain system. In addition, this task model decomposes knowledge construction activities and aligns them with cognitive and metacognitive processes and corresponding scaffolding methods (see Fig. 2 in their article). Third, their analytical techniques for measuring students’ cognitive and metacognitive processes extend their previous work on using sequence mining methods to discover students’ frequently used behavior patterns. This is accomplished by using the following: (1) a cognitive and metacognitive task model to interpret students’ behavior patterns, (2) visualization methods to study the temporal evolution of the discovered behavior patterns, and (3) clustering techniques to discover the temporal evolution characteristics of the different categories of behaviors.

Finally, and similar to the other studies presented in this special issue, the Kinnebrew et al. study raises several critical issues: the small sample size; the taxonomy of the coding scheme used to code cognitive and metacognitive SRL processes; the need to clarify how the theoretical assumptions regarding SRL align with the assumptions embedded in the statistical techniques used to analyze the process data; the inferences regarding the actual deployment of cognitive and metacognitive processes based solely on activities performed in the Betty’s Brain system (without additional process data, e.g., concurrent think-alouds); and the lack of attention to the duration of each cognitive and metacognitive process in understanding the nature and quality of the temporally unfolding SRL processes.

In sum, these five studies have provided very thought-provoking ideas, based on their use of a variety of analytical techniques. For each study, I summarized the major findings, highlighted the strengths, and raised some questions that need further elaboration. In the following section I will raise some theoretical, conceptual, methodological, and instructional issues that are based on these five articles.

Issues, Challenges, and Questions: Implications for Future Research

This special issue suggests three implications for improving our conceptual understanding of self- and socially-regulated learning. Each implication is summarized below, along with some key questions that should provide useful direction and guidance for future research in the area.

Conceptual and Theoretical Issues

The articles presented in this special issue raise several conceptual and theoretical issues regarding the nature and role of metacognition and self-regulation. There is a need for researchers to clearly articulate the theoretical framework, model, or theory being used in their studies. Each study in this special issue exemplifies different levels of adherence to some specific model of SRL. For example, is one using Winne and Hadwin’s (2008) information processing model, another using Zimmerman and Schunk’s (2011) sociocognitive model of SRL, etc.? It is imperative that we adhere to a specific model that we can use to generate hypotheses and make assumptions regarding the role, timing, duration, and quality of specific processes, mechanisms, and constructs. Therefore, there are several issues that need to be addressed, including the following: (1) What self-regulatory strategies are students knowledgeable about? How much practice have they had in using them? And, are they successful in using them? Do they know if they use them successfully? Can we expect young students to be able to dynamically and accurately monitor their cognitive and metacognitive processes? (2) How familiar are students with the tasks they are being asked to complete? Are they familiar with the various aspects of the context and learning system they are being asked to use? (3) What are students’ levels of prior knowledge? How do individual differences impact their knowledge and use of SRL processes? What impact will prior knowledge have on a learner’s ability to self-regulate? (4) Do students have the necessary declarative, procedural, and conditional metacognitive knowledge and regulatory skills essential to regulate their learning? 
Do young students have the ability and sophistication to produce utterances that represent SRL processes, that researchers can reliably code, and that can be externalized to others during collaborative tasks involving negotiation, shared task understanding, and so on? These are just some of the important issues that future research should address, based on an analysis of the studies reported in this special issue.

These issues become even more complex when dealing with socially-regulated learning, as recently exemplified by emerging conceptions of self-regulation, co-regulation, and socially-shared regulation of learning (Hadwin et al. 2011; Järvelä and Hadwin 2013). As exemplified in the studies included in this special issue, there is relatively little research about how groups and individuals in groups engage, sustain, support, and productively regulate collaborative social processes. Accordingly, these studies represent an initial examination of the role of self-regulatory and other regulatory processes across various learning contexts, agents (both human and artificial), and tools that facilitate or impede individual and shared regulation of learning. An exciting prospect is to use computer-based learning environments to successfully support regulation in individual learning and also to leverage such environments in collaborative task contexts to examine the role of socially-regulated learning.

When we oscillate between self-regulated and other-regulated learning (depending on the context, research question, hypotheses, etc.), we face several imminent challenges. Some of the major conceptual and theoretical questions include the following: (1) What are the defining criteria that differentiate self-regulated learning (SRL), co-regulated learning (CoRL), externally regulated learning (ERL), and socially-shared regulated learning (SSRL)? (2) How does the contextually bound nature of SRL impact researchers’ ability to clearly and consistently define and operationalize all the constructs, mechanisms, and processes associated with SRL, CoRL, ERL, and SSRL? (3) Are there clear boundaries between SRL and all other types of regulated learning (i.e., CoRL, ERL, and SSRL)? If so, what are they? Or, do we agree on a few defining criteria (e.g., regulated learning is intentional, goal-directed, and metacognitive in nature; learners have the potential to regulate behavior, cognition, motivation, and affect) while other criteria are contextually bound (e.g., regulated learning is social; artificial pedagogical agents are limited by their design in detecting, tracking, modeling, and fostering students’ SRL)? (4) How can we extend our current frameworks (of SRL, CoRL, ERL, and SSRL) beyond their descriptive nature so that we can improve our ability to make predictions about regulatory behaviors?

Methodological and Analytical Issues

The second major issue stemming from the studies has to do with the differences among the coding schemes used to code the process data, the statistical and theoretical assumptions regarding the data collected and their treatment, and the inferences drawn from the sequentially and temporally unfolding data about self- and socially-regulated learning (Azevedo 2009). The studies in this special issue employ a variety of exploratory analytical tools that forge new directions in the field. However, there are some major issues that need to be addressed as we embark on using contemporary analytical software and tools to detect, track, model, and examine the temporally unfolding SRL, CoRL, ERL, and SSRL processes during various learning activities, across a variety of learning contexts.

There are several issues that should be raised with coding schemes. For example, what is the “right” level of granularity for understanding the role, emergence, development, etc. of a particular cognitive or metacognitive process during task performance or a learning session, whether a student is interacting alone with a computer-based learning environment or in a group with one? What is the conceptual foundation for lumping certain cognitive and metacognitive processes, such as monitoring and control, into the same category? If the purpose of one’s research is to capture and measure the real-time enactment of these processes, then what should the sampling frequency and the timing between observations be? Is the use of inferential statistics justified when analyzing ipsative data? How does the integration of other- and shared regulation complicate these analytical and statistical issues? How many data channels (e.g., concurrent think-alouds alone vs. concurrent think-alouds + eye-tracking + log-files + screen capture of all human-machine interactions) are necessary in order to make valid and reliable inferences regarding the nature of temporally unfolding regulatory processes?

The studies in this special issue use a variety of analytical tools that have strengths and weaknesses and that vary along several key dimensions (e.g., statistical assumptions, number of parameters, accuracy, fit and alignment with theoretical assumptions, ease of use and interpretation) for analyzing self- and socially-regulated learning. We can ask how far the studies in this special issue have advanced the current literature and the methods used to detect, track, model, and infer SRL processes. Despite their exploratory nature, I believe we have learned quite a bit from these studies. As such, we face some major issues and challenges as we forge a new direction in the study of SRL processes. For example, do we have adequate analytical and statistical methods to handle the complex nature of these processes during learning with others? Another issue related to level of granularity is temporal sequencing in the deployment of self- and socially-regulatory processes during learning. This is an extremely important issue that has to do with extending the timeframe for learning tasks so that we can capture and investigate the complexity of the underlying metacognitive and self-regulatory processes. How do these processes relate to learning outcomes in complex and ill-structured tasks? In addition, how can these exploratory analytical tools accommodate additional trace data (e.g., video of the context, screen capture of human-machine interactions, nonverbal expressions, gestures) to provide contextual information that enhances the accuracy of results? To accommodate such additional trace data, researchers would have to temporally align streams collected at different sampling rates (e.g., electrodermal activity at 8 Hz with eye-tracking data at 120 Hz) before analyzing SRL process data with exploratory analytical techniques.
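The alignment step can be sketched as follows, assuming linear interpolation is acceptable for the slower channel. The signals below are synthetic stand-ins, not real physiological or gaze data; the point is only that both channels must share one time base before any sequential analysis.

```python
import numpy as np

# Hypothetical streams over the same 10-second window:
# electrodermal activity (EDA) at 8 Hz, gaze position at 120 Hz.
eda_t = np.linspace(0, 10, 80, endpoint=False)     # 8 Hz timestamps (s)
eda = np.sin(eda_t)                                # stand-in EDA values
gaze_t = np.linspace(0, 10, 1200, endpoint=False)  # 120 Hz timestamps (s)
gaze = np.cos(gaze_t)                              # stand-in gaze values

# Resample the slower EDA channel onto the eye-tracker's timeline by
# linear interpolation, so both channels share a single time base.
eda_at_120hz = np.interp(gaze_t, eda_t, eda)
print(eda_at_120hz.shape)  # now matches the 120 Hz channel
```

Whether upsampling the slow channel (as here) or downsampling the fast one is appropriate depends on the construct being measured, which is itself a theoretical decision rather than a purely technical one.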

Another issue not addressed by the current studies that should be emphasized in future research is the inclusion of the time duration and valence associated with SRL processes, since they are integral to examining the quality and nature of the temporally unfolding processes (see Azevedo et al. 2010, 2013). For example, if the enactment of a learning strategy, such as taking notes, takes over 1 min to complete (followed by another SRL process) while a metacognitive judgment, such as a content evaluation of the relevancy of multimedia materials, may last only 2 s (followed by another SRL process), this creates issues of unbalanced code density and potentially inaccurate inferences about the processes if duration is not taken into consideration when analyzing process data. Similarly, valence has emerged as another critical issue in examining SRL processes (see Azevedo et al. 2010; Azevedo et al. 2011a, b; Greene and Azevedo 2009, 2010). For example, certain metacognitive monitoring and regulatory processes, such as judgments of learning (e.g., JOL–, “I do not understand this paragraph,” and JOL+, “I do understand this paragraph”), need to be differentiated by adding valence. Valence has also been used for learning strategies to indicate correct versus incorrect summaries of the instructional content. Valence allows researchers to examine the microlevel feedback loops between metacognitive monitoring and control, and its use can be compared to theoretical assumptions. For example, it can be theoretically postulated that rereading would follow a negative JOL, since it would be adaptive for students to reread the same text after indicating that they do not understand it. By contrast, one can postulate that a number of regulatory processes can follow a positive JOL utterance indicating that students understood the paragraph.
More specifically, they can set a new goal, continue reading because they are acquiring knowledge about the topic, coordinate between external representations of information, etc.
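The duration and valence coding described above can be made concrete with a small sketch. The event codes, timings, and valences below are hypothetical, and the first-order transition count is only one simple way to examine monitoring-control sequences such as JOL− followed by rereading.

```python
# Hypothetical sketch of event-coded SRL process data: each coded event
# carries an onset, a duration, and (where applicable) a valence, so that
# code density and feedback loops between monitoring and control (e.g.,
# a negative JOL followed by rereading) can be examined. Illustrative
# codes and timings only, not real data.
from collections import Counter

events = [
    {"code": "JOL",        "valence": "-",  "onset": 0.0,  "duration": 2.0},
    {"code": "REREAD",     "valence": None, "onset": 2.0,  "duration": 30.0},
    {"code": "JOL",        "valence": "+",  "onset": 32.0, "duration": 2.0},
    {"code": "TAKE_NOTES", "valence": None, "onset": 34.0, "duration": 65.0},
]

def label(e):
    """Combine code and valence into one label, e.g. 'JOL-'."""
    return e["code"] + (e["valence"] or "")

# First-order transitions between successive coded events
transitions = Counter((label(a), label(b)) for a, b in zip(events, events[1:]))

# Total time per code, exposing unbalanced code density:
# one note-taking event occupies far more of the session than each JOL.
time_per_code = Counter()
for e in events:
    time_per_code[e["code"]] += e["duration"]

print(transitions[("JOL-", "REREAD")])  # 1: negative JOL followed by rereading
print(time_per_code["TAKE_NOTES"])      # 65.0 s, vs. 2.0 s per JOL
```

Counting raw transitions treats a 2 s judgment and a 65 s strategy as equal units; weighting by duration, as the paragraph above argues, is one way to avoid that imbalance.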

Instructional Issues

The study of sequential and temporal self- and socially-regulated learning occurs in learning contexts that vary along several dimensions (see Azevedo and Aleven 2013a). The studies included in this special issue illustrate many of these dimensions: the presence of human (e.g., peers, researchers) or artificial agents (e.g., pedagogical agents), different feedback systems, the nature and complexity of and familiarity with the task, relevant prior knowledge, the age of participants, the time allotted to complete the task, and the affordances that let participants express agency and SRL processes. Studying self- and socially-regulated learning across such a range of instructional contexts is therefore imperative (Azevedo and Aleven 2013b).

Future work in the area needs to address several outstanding issues. First, issues related to the learning context need to be clearly described and accounted for by the learner and the advanced learning technology (ALT; e.g., Kinnebrew et al.’s Betty’s Brain). In this category, several variables of interest that need to be addressed include the following: (1) What are the constituents of the learning context (e.g., human agents, artificial agents, nature, characteristics, and interdependence of the personal, physical, embodied, and virtual space(s))? (2) What are the learning goal(s) (e.g., provision of a challenging learning goal(s), self- or other-generated goal(s), duration allocated to completing the learning goal(s))? (3) What is the accessibility of instructional resources (e.g., accessibility to these resources to facilitate goal attainment, engaging in help-seeking behavior and scaffolding while consulting resources)? (4) What are the dynamic interactions between the learner(s) and other external/in-system regulating agents (e.g., pedagogical agents’ role(s), levels of interaction, types and timing of scaffolding and feedback, and embodiment of modeling and scaffolding and fading metaphor behaviors)? (5) What is the role of assessment in enhancing performance, learning, understanding, and problem solving?

In addition, future research should also address the following questions. How do different instructional conditions, learning environments, and contexts impact learners’ ability to regulate aspects of their learning (e.g., cognitive, metacognitive, motivational, and affective)? Can an activity that was initially designed to foster socially-regulated learning (e.g., at the beginning of a task, through the use of pedagogical agents) lead learners to decrease their reliance on the agents and autonomously foster their own SRL? If so, when did the change(s) occur, how did it/they occur, who or what (e.g., pedagogical agents) facilitated the change, and what evidence indicates when, how, why, and what changes occurred? Will the data be consistent for all learners even if they are randomly assigned to different instructional conditions? What would this tell us about SRL and whether it is contextually bound?

Other issues and open questions related specifically to those using ALTs (e.g., in the Molenaar & Chiu, Malmberg et al., and Kinnebrew et al. studies) include the following: (1) Will the ALTs offer opportunities for learning about these complex processes? (2) Will the environment provide opportunities for students to practice and receive feedback about these processes? (3) What are students’ self-efficacy, interest, task value, and goal orientations, which may influence their ability to self-regulate? (4) Are students able to monitor and regulate their emotional states during learning? If they are not able to, then should we use artificial agents to train learners to accurately detect, monitor, and regulate emotions as part of the overall learning process? How do we design ALTs that are sensitive to fluctuations in learners’ motivational and affective states? (5) What are the types of interactivity between the learner and ALT (and other contextually embedded external agents)? Are there different levels of learner control? Is the system purely learner-controlled and therefore reliant on the learner’s ability to self-regulate, or is the system adaptive in externally regulating and supporting students’ self-regulated learning through the use of complex AI algorithms that provide SRL scaffolding and feedback? (6) What types of scaffolding exist (e.g., what is the role of externally regulating agents), and do they play different roles (e.g., scaffolding, modeling, etc.)? Is their role to monitor or model students’ emerging understanding, facilitate knowledge acquisition, provide feedback, scaffold learning, etc.? Do the levels of scaffolding remain constant during learning, fade over time, or fluctuate during learning? When do these human and artificial agents intervene? How do they demonstrate their interventions (e.g., conversation, gesturing, facial expressions)?
Lastly, how can we use emerging technologies (e.g., augmented reality and brain-computer interface) to enhance learners’ ability to understand, acquire, internalize, share, use, and transfer self- and socially-shared monitoring and regulatory knowledge and skills (Azevedo 2014; Biswas et al. 2010)? In sum, these are just some of the most relevant issues that need to be addressed by interdisciplinary researchers as we consider the future design and use of ALTs for studying and fostering self-regulated learning, coregulated learning, externally regulated learning, and socially-shared regulated learning.

Conclusion

The research presented in this special issue highlights several key theoretical, conceptual, methodological, analytical, and instructional issues related to the examination of the sequential and temporal characteristics of self- and socially-regulated learning. The studies employ new exploratory techniques for examining SRL, and the special issue further contributes to the emerging evidence that self-report measures are inadequate for measuring SRL. Despite the numerous conceptual, theoretical, methodological, and analytical issues raised by the authors of this special issue and in my commentary, I truly believe that focusing on process data will lead to advances in theory, methods, analytical techniques, and, ultimately, instructional recommendations. The analytical methods illustrated in this special issue are exciting and contribute immensely to the field. Each approach exemplified by the authors shows a different way of analyzing the data and demonstrates the potential of each method to significantly augment our current understanding of the nature of SRL. Each method also, in its own way, provokes researchers to seriously consider the assumptions, methods, and analytical techniques in their own work. In sum, the future is bright, and I am honored to be part of this community!