The purpose of this work was to evaluate whether the ICAP framework could be extended to the metacognitive study literature by examining students’ self-reported study practices in an authentic classroom context. When integrating these two strands, two aspects of studying emerged: how one studied and how one managed study time. Interestingly, each strand contributed strategies that the other had not included (e.g., the metacognitive study literature contributed time-management, monitoring, and regulation strategies, and ICAP contributed constructive strategies such as self-explanation and analogical comparison). To assess these aspects of studying, students responded to closed-ended (Likert-scale, forced-choice) and open-ended questions before three noncumulative exams. Each type of measure had different affordances. The open-ended question provided a census of which study strategies were most salient to students, as their responses were self-generated. In contrast, the Likert-scale and forced-choice questions required students to make judgments about the described strategies. Across the measures, students reported using several types of strategies. In the next few sections, we discuss the study and time-management strategies in terms of their reported use, their relations to each other (for study strategies only), and their relations to exam performance. Then we discuss the relations across the measures and some limitations and directions for future work.
Study strategies
The study strategies referred to the learning activities students engaged in to prepare for the exam. These included rewriting, highlighting, summarizing, generating examples, self-explaining, analogically comparing, quizzing/self-testing, metacognitive monitoring, and regulation. In general, the open-ended question revealed that students did not describe many study strategies. In fact, the most common strategy was monitoring, which only half the students reported using for any given exam. This strategy also had the largest variation in how often students reported using it in their open-ended responses. Otherwise, students were consistent in that they rarely mentioned the other strategies. In contrast, when prompted by the Likert-scale measures, students reported moderate use of constructive strategies. One reason for this discrepancy between the prompted and open-ended responses is that students might not be aware of the strategies they use until prompted about them. Another is that prompting students with a strategy might lead them to inflate their responses (see the Relation of the Measures section).
Although there was variation in the reports of student study strategies, we were able to categorize the self-reported strategies in alignment with the ICAP framework, which represented two categories: active and constructive (Chi, 2009; Chi & Wylie, 2014). To further unpack the ICAP framework with self-reports, we also examined whether constructive and active strategies were related to each other. Some of the active and constructive strategies were positively related, including creating examples and summarizing, and monitoring and highlighting. These results suggest that active and constructive strategies can work together, with the active strategies helping to facilitate the use of constructive strategies. For example, some students reported highlighting what they did not know, suggesting that when the two are used together, an active strategy can serve as a marker or referent for the constructive strategy to build upon. In this respect, the result also supports the hierarchy assumption of the ICAP framework, in which the higher categories subsume the processes of the lower ones.
We also examined the relations between strategies within the same category, active or constructive. For the open-ended strategies, we found interesting patterns: some of the active strategies were related to each other (rewriting and highlighting), but these were not related to summarizing. Unlike the active strategies, each of the constructive strategies was positively related to another constructive strategy, with self-explanation being positively related to the greatest number of other constructive strategies (creating examples, quizzing, comparison, and monitoring). These patterns were also supported by the Likert-scale measures, in which all of the constructive strategies were positively related to each other. These findings suggest that the use of one constructive strategy tends to support the use of others, at least when students report using them. This result is also consistent with the ICAP hypothesis that constructive strategies share inference-making processes and may be interrelated. For example, quizzing creates an opportunity to monitor one’s knowledge, and monitoring may facilitate self-explanation.
It is also important to note that there might be individual differences at play, such that students’ motivational beliefs might be driving them to use constructive strategies. For example, if students had the goal to understand the material completely (a mastery-approach goal), we might predict that they will discover and use constructive strategies to accomplish that goal (Nokes-Malach & Mestre, 2013). They might also be more likely to adopt other constructive strategies as opposed to active strategies that do not lead to conceptual understanding.
Moving beyond the relations between the strategies, we also examined whether these strategies were related to exam performance to further test the ICAP framework. The results were consistent with the hypothesis that constructive strategies would have more positive relations to exam performance than the active strategies. Prior studies testing the hypothesis that constructive activities lead to better learning and performance than active ones have focused on observable learning activities. In the current work, we found similar results for students’ self-reported study strategies. This finding adds to the growing evidence base for the positive relations between constructive study strategies and learning and performance outcomes across observational, experimental, and now self-report data.
Time-management strategies
Additionally, we examined students’ time-management strategies, which were unique to the metacognitive study strategy literature. In students’ open-ended responses, time was not a salient aspect of their study descriptions. This finding suggests that students rarely think of time as one way to study, which is consistent with Kornell and Bjork’s (2007) interpretation that students do not think spacing one’s study time is a strategy that helps memory. It could also be the case that answering the closed-ended questions about timing before the open-ended question led some students to omit those factors from their open-ended response because they thought that information was already accounted for. Alternatively, one might have expected that bringing students’ attention to those study features would have primed them to include those factors in their open-ended statements. Across all time-management variables, there was no relation to exam performance, which might be because time was not a salient factor in students’ minds when it came to their studying practices and/or because time is difficult to estimate, as evidenced by the large standard deviations for the amount of time studying and the percentage of time dedicated to different resources. More work is necessary to understand how students perceive their study time and the structure of their learning activities.
Relation of the measures: The importance of question type
The type of question used to assess student strategies is important to consider. Do similar measures align, and does question type have implications for how responses relate to student learning? This work revealed that when the questions were framed around a specific exam, the two types of questions sometimes aligned, but this depended on the construct. Students’ open-ended reports of self-explanation and analogical comparison—concrete and explicit strategies—were consistently related to the corresponding Likert-scale items. However, the open-ended monitoring statement was related to the monitoring Likert-scale measure only for the first and third exams; this relation disappeared at Exam 2. This result is not very surprising, as metacognitive processes have a history of being difficult to capture adequately with Likert-scale measures (Winne & Perry, 2000; Zepeda, 2016). Another reason this relation might have been weak is that, in hindsight, the Likert scales captured whether students thought they were able to monitor, not specifically whether they did monitor.
Interestingly, the variation in measurement also revealed differences in the strategies’ relations to exam performance. Self-explanation and analogical comparison were positively related to exam performance when measured by a Likert scale, but not by the open-ended question. One likely explanation is that students do not realize that self-explanation and analogical comparison are study strategies and thus are less likely to report them, limiting the opportunity for these reports to relate to exam performance. Another possibility is that the Likert-scale measures prompt specific (and better) ways of using self-explanation and analogical comparison, which do not capture all the ways students engage in and describe those strategies. For example, the self-explanation and analogical comparison items contained aspects of monitoring (which was positively related to exam performance for both types of measures), such that they required students to know which parts of the material were difficult for them (e.g., “explain difficult concepts” and “If I don’t understand something”). Perhaps if the open-ended responses had been coded in better alignment with the Likert-scale items, they would have shown more converging relations.
The use of the open-ended question (although tedious to code) provides insight into the strategies at the forefront of students’ study habits and reveals which strategies are overt and valued by students. Framing the question more specifically (a particular context vs. general) can also remove some of the difficulty in retrieving which strategies they used. Both of these adjustments might also help alleviate the inflated responses that can occur when general strategy prompts are provided. For comparison, in this work, students reported quizzing or testing themselves slightly more (an average of 19% across the exams) than in the one prior study that used a general open-ended question (10.7%; Karpicke et al., 2009). These percentages are much lower in comparison to studies that specifically asked students whether they generally used practice problems or tested themselves (71%, Hartwig & Dunlosky, 2012; 72%, Morehead et al., 2016).
There were also other differences between prior work and the research presented here. These differences included rewriting notes (16% averaged across exams vs. 29.9%, Karpicke et al., 2009; 33%, Hartwig & Dunlosky, 2012; 33%, Morehead et al., 2016) and highlighting (13% averaged across exams vs. 6.2%, Karpicke et al., 2009; 22%, Hartwig & Dunlosky, 2012; 53%, Morehead et al., 2016). There was a similar response rate for creating examples (4% averaged across exams vs. 4.5%; Karpicke et al., 2009). From these comparisons, it appears that the contextual framing of the question has implications for the frequency with which a strategy is reported, which in turn can affect the likelihood that it is related to performance. When assessing student study strategies, researchers may benefit from further evaluating the nature of the questions they pose.
Limitations and future directions
Students who more frequently reported using (open-ended) or endorsing (Likert scale) constructive strategies were also students who performed well. Although these results are consistent with prior experimental work, we cannot conclude that the relations were causal in this study. It is possible that other unmeasured, correlated variables were driving these effects (e.g., motivation; see Zepeda, Martin, & Butler, in press, for a commentary). Future work should further examine the relations between these study strategies, motivation, and learning outcomes. The current approach and methodology provide a set of measures that could be used in future work to examine whether a given instructional intervention changes students’ self-reported study strategies as well as their learning and performance outcomes.
The current work also focused on how students reported behaving outside of class (their study strategies), but it was not clear how the in-class demands interacted with or affected how students behaved outside of class. Although this work evaluated two semesters of the course, we did not examine whether the in-class demands (warm-up quizzes versus end-of-lecture quizzes) affected the types of strategies students reported. The types of activities and resources a course provides may result in differential learning outcomes. For instance, a classroom that provides resources students can easily use to test themselves might result in more students reporting that they quizzed themselves.
Asking students to report on how they studied is itself a metacognitive process. Students who respond to questions about their studying have to both assess their awareness of their strategies and reflect on them. It is possible that asking students about their studying led them to engage in more self-regulatory processes. For example, the questions might have prompted students to be more evaluative about the ways in which they studied, resulting in changes to their study strategies and subsequent learning. Future work could test whether responding to questions about studying throughout a course affects subsequent learning.
Another limitation of the current work is the retrospective nature of the self-reports. Whether students were accurate in their retrospective judgments of these strategies remains open for investigation. The current approach also does not capture the amount of time students spent using different strategies. An ecological momentary assessment would be a nice complement to this work, as it would reveal when students use these study strategies and could provide a better estimate of how much time they spent using each type (Shiffman, Stone, & Hufford, 2008).
The type of knowledge covered in specific courses might also have implications for the strategies students report using and their relations to exam performance (Chi & Wylie, 2014; Wolters & Pintrich, 1998). For example, the course and exams in this study involved factual, conceptual, and applied knowledge: students had to know the topics covered, understand their conceptual underpinnings, and apply these concepts to different situations. Critically, the course went beyond covering only factual knowledge but did not require students to solve mathematical problems. A course emphasizing only factual knowledge, or one requiring procedural knowledge with mathematical procedures, may yield different types of student-reported strategies and relations to exam performance. A productive line of research could examine the types of knowledge that self-reported strategies relate to across different domains. Courses that emphasize the link between procedural and conceptual knowledge, or between factual and conceptual knowledge, and that provide opportunities for students to apply their knowledge to new situations might show different relations between study strategies and performance outcomes in comparison to courses that emphasize only the procedural, factual, or conceptual aspects.
Conclusions
Integrating the metacognitive study strategy and ICAP literatures provided several affordances in examining students’ study practices such as categorizing study strategies based on theory (i.e., constructive and active categories via ICAP) and broadening the strategies to include additional ones. In support of the ICAP framework, we found that constructive strategies had more positive relations with each other and exam performance in comparison to active strategies. In particular, monitoring strongly predicted performance and was positively related to many of the other strategies. These results, along with the theoretical tie between monitoring and many of the constructive strategies, suggest that it may be a powerful construct to independently measure and incorporate as a separate category into the ICAP framework. Importantly, this study revealed that students’ self-reported study strategies were predictive of learning and performance outcomes in theoretically consistent ways, providing support for researchers to use these self-reports as a measure in future study strategy experiments and interventions.