Prior failures, laboring in vain, and knowing when to give up: Incremental versus entity theories

Abstract

Against intuition, a set of “desirable difficulties” has been touted as a way to improve learning and lengthen retention. These include, for instance, varying the conditions of learning to allow for more active, effortful, or challenging contexts. In the current paper, we introduce data showing that, on the contrary, learning to know when to take the easy road may be crucial to avoiding “laboring in vain.” We presented participants with prior problems – either easy or difficult – followed by the choice of an easy or a difficult current problem. Our primary goal was to examine the notion that past failures (which are more likely on the difficult prior items) may give learners a basis for choosing the easy rather than the difficult current problem. In other words, if one has already labored in vain, the easier items may now be more desirable. In addition, we compare the selections made by those holding incremental versus entity perspectives, given their fundamentally opposing views on effort. Interestingly, our results showed that incremental theorists, who are generally proponents of effort, were more likely to select the easy problems, but only when they had experienced failure on prior, similar, difficult tasks. We interpret these data to suggest that those holding an incremental view may be more in tune with their past efforts, resulting in a Metacognition-by-Experience, or ME, strategy, and we also hint at its generalizability through cross-cultural comparisons.

Introduction

As research on metacognition continues to grow, the idea that better metacognition leads to more knowledge becomes easy to believe. After all, the basic definition of metacognition is “knowing about knowing.” In this paper, we seek to emphasize a view that is perhaps less intuitive: Good metacognition does not unconditionally mean more knowledge or better learning. On the contrary, good metacognition means that one is well aware of what one knows and what one does not know. Of course, if one understands that one does not know, appropriate study strategies can then ensue, leading to more knowledge after all (Aronson et al. 2002; Kornell and Son 2009; Metcalfe and Finn 2008). In the literature, many of these appropriate strategies have been driven by the notion of “desirable difficulties” (Bjork 1994), which posits that persisting through difficult and sometimes uncomfortable learning situations may be all for the best. Here, however, we highlight data indicating that when material is judged too difficult to learn, good metacognition can mean knowing when to avoid laboring in vain rather than persisting. Indeed, the idea of desirable difficulty, while advantageous from the teacher’s perspective, could certainly be “undesired” by the learner (Yue et al. 2013). We further consider this important question here and ask: When, and for whom, might desirable difficulty be undesired?

How might one learn to avoid laboring in vain? We believe that the best, and perhaps most common, way is to learn from experiences with failure. Recent research has shown that learners seem to be in tune with feedback from performance on a test, as opposed to the learning process itself, such as improvement or decay (Bjork and Bjork 2011; Kornell and Hausman 2017; Metcalfe and Miele 2014; Metcalfe and Xu 2018). And earlier research has shown that people are often overconfident in their metacognitive judgments prior to being tested, but flip to being underconfident about their learning after they have been exposed to a difficult test – a phenomenon known as Memory-For-Past-Test, or MPT (Finn and Metcalfe 2007; Koriat 1997). In these experiments, a drop is observed in participants’ judgments of learning (JOLs) after they fail to answer as many questions as they had expected on a previous test.

For the most part, the metacognitive and learning literature has delivered a message that suggests a “persevere, don’t give up” strategy. Bjork’s (1994) notable phrase, “desirable difficulties,” has travelled quickly through the scientific field and has made its way into education (Bjork and Bjork 2011; Bjork 1994). Many students seem to be familiar with its meaning, knowing that even when learning is challenging or frustrating, it could mean that they are, actually, learning. Self-testing, for instance, has been shown to be consistently beneficial for learning, even though people may find it less comforting than a strategy such as passive reading (Roediger III et al. 2009, 2011). And on the whole, we would agree that putting in more effort than you might feel comfortable with is a good thing – any learner can increase their knowledge with effort. In fact, endorsing an entity theory, or believing that effort would not impact learning, may be quite harmful for how one decides to learn or what one thinks about one’s potential, as compared to endorsing an incremental theory (Miele et al. 2013).

Knowing what you do and do not know, however, is quite a complex process, and we also believe that each situation is subtly different, requiring an extremely flexible monitoring and control system that must constantly evolve. It is easy to remember instances, for example, when we have put in a great deal of effort, only to come up empty. While in many circumstances the right kind of effort is bound to result in increases in competence, there may be times – say, when time is limited – when it is best to “give up” in the sense of allocating one’s time to a more manageable task. Some data have shown that in these situations participants will nonetheless labor in vain (Mazzoni and Nelson 1995), while in other situations they will choose to allocate their efforts to relatively easier tasks (Son and Metcalfe 2000).

Knowing when to persist and when to give up may arguably be one of the most difficult metacognitive decisions to make, particularly since the future is uncertain (Son and Sethi 2006, 2010). The data on the Region-of-Proximal Learning, or the RPL paradigm (Metcalfe 2002; Metcalfe and Kornell 2003), have shown that it is beneficial for learners to allocate their study time to items that are intermediate in difficulty (as opposed to too easy or too difficult), but how does one know what is “intermediate?” It seems that, in most cases, the best way to gauge the possibility of successful learning is to give weight to past experiences, and to pay attention to feedback, before making an allocation decision on a current task. But even so, for any learner, during study, is it possible to determine whether that decision is the optimal one? If one does end up persisting on a fairly difficult task, but sees no obvious gains on the test, can we ever confirm that the effort will result in zero savings and only wasted time? If one does give up, choosing to invest in an easier task, can we confirm that they might not have succeeded on the more difficult task? The answers to these questions, while they cannot be certain, seem to be the basis for why the metacognitive process continues to be a work in progress for the individual learner.

Influence of experience in learning

In the current research, we investigated the choices that people make during learning – Will people select the easier tasks or the more difficult tasks? And we began with the premise that the learner’s metacognitive decisions – whether optimal or not – will be shaped by one’s past experiences. For example, if one were to be exposed to challenging tasks, such as a test of high-level difficulty, one is likely to be aware of potential failure, and choose to avoid laboring in vain on a current task that may be similarly difficult. Without such prior exposure, however, one might opt for the “don’t give up” strategy, investing their time on a difficult problem, potentially resulting in no obvious gains. Because of these differences in the recent past, one individual’s behavior is likely to differ from another’s, if in fact learners are taking the past into account. We here call this the metacognition-by-experience (ME) strategy.

Basis for the ME hypothesis

In addition to the MPT data, recent research suggests that past failures experienced during learning can affect the prediction of performance on current tasks (Bae 2016; Dunlosky and Matvey 2001; Hertzog et al. 2002; Koriat and Ackerman 2010). That is, regardless of actual current performance, given previous successes or failures, people are likely to feel differently about their current performance. In a study by Metcalfe et al. (1993), for instance, participants showed a tendency to (mistakenly) judge that a previously exposed word could be better memorized than a new word, despite it not actually having been learned. Others have shown that the order of previous tasks matters too – if previous experiences ended “on a high note” (with a higher likelihood of learning success), as opposed to on a low note (on a challenging task), people were likely to choose to participate in that task again (Finn 2010).

The degree of difficulty experienced on previous tasks will also affect the judged degree of learning on subsequent tasks. Dunlosky and Matvey (2001) showed that difficulty on previous tasks influenced people’s JOLs on current tasks. In their study, participants learned noun-noun word pairs and then judged how well they would be able to remember each pair on a later test. The word pairs varied in difficulty – some were easy, being related (“desk-chair”), and others were difficult, being unrelated (“dog-spoon”). They found that JOLs for a current difficult pair were significantly lower when it followed an easier pair than when it followed another difficult pair. In other words, even though there was no difference in the difficulty level of the current task (all were of similar difficulty), and also no difference in later performance, experience with easier prior items – i.e. experience with more success than failure – led to a drop in confidence when suddenly faced with a current difficult pair.

Effort outlook

Beyond past experience with success and failure, the decision to choose an easier or a more challenging task may depend on one’s feelings about one’s abilities or one’s relationship with effort (Koriat et al. 2014). If one were the kind of person who felt that they had a sufficient amount of potential – however that may be measured – or that only a small degree of effort needed to be exerted for success, then they might believe that success on a difficult task is more probable (regardless of actual success). This belief may, in turn, play out in behaviors that look relatively high risk, such as attempting a more challenging, as opposed to an easier, task. On the other hand, if one felt that ability could not be changed no matter how much effort one exerted, then attempting a difficult task might be asking for failure, the effort being only in vain. How effort plays out during learning decisions, however, may not be as simple as in the prior two examples. If past experiences were considered by the learner, surely there would be a variety of different experiences regarding success/failure and high effort/low effort. In other words, in any one individual’s experience, it probably is not the case that more effort always led to more success. Given this, how one decides to use past experiences to guide ongoing task selections cannot be straightforward.

In an attempt to simplify our detailed hypotheses about participants in the current study, we considered how individual Theories of Intelligence (TOI) come into play when making learning decisions (Dweck et al. 1995): Endorsing an “intelligence is fixed” (entity) versus an “intelligence is malleable” (incremental) theory should impact learning in different ways. Recent studies have provided evidence that differences in TOI can influence the learner’s JOLs (Briñol et al. 2006; Labroo and Kim 2009; Miele et al. 2011, 2013). In general, entity theorists tend to focus on performance goals (goals of improving performance on future tests) and seek positive evaluations of their abilities (Dweck and Leggett 1988). In turn, they place less weight on effort, tending to believe that any exertion of effort is an indication of low ability (Rhodewalt 1994). Thus, in general, entity theorists would be likely to choose targets that are judged to be sufficiently achievable – i.e. the easier tasks. Incremental theorists, on the other hand, tend to pursue learning goals (goals of improving one’s knowledge or abilities rather than performance on tests), prefer high-level goals, and tend to believe that effort is an indication of learning, even in the face of failure. Indeed, when dissatisfied with their performance, incremental theorists tended to show more corrective actions than did entity theorists (Hong et al. 1999). Overall, then, incremental theorists would be likely to choose tasks that are challenging, such as those of high-level difficulty, as compared to entity theorists.

Past research has shown that, in fact, entity theorists tend to select easier tasks at a higher rate than incremental theorists. In a study by Leggett (1985), middle school students were asked to pick a few tasks to perform. Results showed a difference in the types of tasks that were selected: About 80% of students endorsing an entity theory selected performance-oriented tasks (e.g. “I like problems that aren’t too hard, so I don’t get any wrong.”), while about 60% of students endorsing an incremental theory selected learning-oriented tasks (e.g. “I like problems that are challenging.”). Similar results were obtained from college students (Mueller and Dweck 1997). Miele et al. (2013) found related results, showing that elementary-school children who endorsed more of an entity theory (as compared to an incremental theory) had lower confidence on a reading task when more effort was required, even though comprehension performance did not differ. These results suggest that even in very young children, a required increase in effort seems to be interpreted as a lack of ability, particularly in children who tend to believe in an “intelligence is fixed” mindset (see also Ehrlinger et al. 2016).

In other studies (Aronson et al. 2002; Dweck and Leggett 1988; Hong et al. 1999; Miele and Molden 2010), researchers have also been able to temporarily manipulate one’s mindset. For example, Miele and Molden (2010) showed that after reading articles describing intelligence in two ways – either fixed or flexible – participants were led to temporarily endorse an entity or incremental mindset, which, in turn, influenced their judgments of comprehension regarding the text. Similarly, in a study by Hong and colleagues (1999), half of the participants read a text stating that intelligence can be improved by effort, while the other half read a passage saying that intelligence is fixed. After reading the different passages, and solving a novel difficult problem, participants were given feedback. Results showed that when negative feedback was given, participants in the incremental condition showed a more positive response than those in the entity condition. That is, being made to think in an incremental way allowed them to attempt to correct the problem. A study by Aronson et al. (2002) showed similar results: those led to believe in an incremental mindset were able to increase their performance in school compared to a no-treatment condition.

Thus, while we expected to find use of the ME strategy, there were two other potential strategies that we also considered, given the differing views of effort across TOIs. First, particularly for those who lean towards an incremental view, we thought that participants might tend to select the more effortful problem regardless of past experiences with failure, taking on a desirable difficulty (DD) strategy. On the other hand, particularly for those who lean towards an entity view, we considered a desirable ease (DE) strategy, where, regardless of past experiences, learners select the relatively easier, less effortful tasks. Indeed, some strategies are judged to be more comfortable even though they are less beneficial for learning, such as choosing massing over spacing (Son 2005; Son and Simon 2012) or reading over self-testing (Kornell and Son 2009). Thus, while we bet our money on the ME hypothesis, there were three strategies we considered in all:

  1. The Metacognition-by-Experience (ME) Strategy, where task selection will depend on each individual’s own past experiences, including failures and successes.

  2. The Desirable Difficulty (DD) Strategy, where regardless of past successes or failures, individuals, especially those holding incremental views, may tend to choose the more difficult tasks.

  3. The Desirable Ease (DE) Strategy, where regardless of past successes or failures, individuals, especially those holding entity views, will tend to choose the easier tasks.

Current study

In the current study, we first manipulated beliefs about intelligence – towards either entity or incremental beliefs – by having participants read different texts about intelligence (as in Miele and Molden 2010). Then, participants were presented with a series of either difficult or easy “prior” tasks, before selecting the difficulty level – either easy or difficult – of the final target question. In Experiment 1, the target task was of the same type as the prior tasks (all problems consisted of trivia questions). In Experiment 2, the target problem was of a different type (analogy) than the prior problems (trivia). By changing the type of the target task, we were able to see whether past successes and failures would be weighted differently based on the level of perceived overlap between tasks. Finally, while the original studies – Experiments 1 and 2 – were conducted with students in South Korea, we also had the opportunity to compare our findings with participants in the US (Experiment 3). As a result, we were able to discuss, to some degree, the generalizability of our findings.

Experiment 1

Participants

Eighty-one undergraduate students (Male = 42, Female = 39, Mage = 21.63, SDage = 2.66) attending a 4-year college in South Korea participated in this experiment. For this and all subsequent experiments, the target sample size of 80 was calculated using the G*Power software, with parameters set to 80% power, an alpha level of .05, and an effect size of .16 [based on the results of Miele et al. (2013)]. Participants signed consent forms prior to the experiment and received credit for their participation.
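An a-priori power analysis of this kind can also be scripted directly. The sketch below is a minimal illustration, under two assumptions of ours that the text does not confirm: that the effect size of .16 is Cohen’s f, and that the test of interest is a single-degree-of-freedom effect in the 2 × 2 between-subjects design. Because G*Power’s exact settings (test family, effect-size metric) are not stated, the scripted result need not reproduce the reported target of 80.

```python
# Sketch of an a-priori power analysis (80% power, alpha = .05, effect
# size .16). Assumptions, not taken from the paper: the effect size is
# Cohen's f, and the test is a single-df effect in a 2 x 2 design.
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, n_total, df1=1, alpha=0.05):
    """Power of an F test with noncentrality lambda = f^2 * N."""
    df2 = n_total - 4                       # residual df in a 2 x 2 design
    crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under the null
    return ncf.sf(crit, df1, df2, f_effect ** 2 * n_total)

n = 8
while anova_power(0.16, n) < 0.80:          # smallest N reaching 80% power
    n += 1
print("required total N:", n)
```

Note that different effect-size conventions (f vs. η²) yield very different required sample sizes, which is why the parameterization matters when reporting such analyses.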

Materials

Theories of intelligence (TOI) manipulation

Two versions of an article created by Bergen (1991) were used to temporarily manipulate beliefs about intelligence. As in Miele and Molden (2010), the article was edited to look as though it had originally appeared in the November 2007 issue of Psychology Today, titled “The Origins of Intelligence: Is the Nature–Nurture Controversy Resolved?”, and was then translated into Korean. To create an “entity belief,” a version of the article stating that intelligence is a genetically determined attribute that changes very little over time was presented. For example, one paragraph stated,

“The brilliance of Mozart and Einstein was mostly built into them at birth. Their genius was probably the result of their DNA.”

By contrast, the “incremental belief” version stated that intelligence is an environmentally determined attribute that can be improved over time, for instance:

“The brilliance of Leonardo da Vinci and Albert Einstein was probably due to a challenging environment. Their genius had little to do with their genetic structure.”

After reading the article, participants answered three questions about its content: (1) “Please summarize the article in one sentence.”, (2) “What would you say is the most memorable example presented in the above article?”, and (3) “Please describe one of the most convincing pieces of evidence offered in this article.”

Cognitive task

All of the to-be-solved problems consisted of trivia questions and were tested using a multiple-choice format. Participants first solved five easy or five difficult leading (prior) trivia questions, by random assignment. Afterwards, they solved one more trivia question, either easy or difficult. An example of an easy problem was “Which country has the largest population in the world? 1. China, 2. India, 3. United States, 4. Russia”; a difficult example was “What is the color order from the left of the Olympic rings? 1. Green-Yellow-Red-Black-Blue, 2. Yellow-Blue-Green-Red-Black, 3. Red-Green-Black-Yellow-Blue, 4. Blue-Yellow-Black-Green-Red”.

Theories of intelligence questionnaire

At the end of the experiment, participants completed the eight-item Theories of Intelligence Questionnaire (Dweck et al. 1995). The questionnaire asks participants to rate their level of agreement, on a 6-point Likert scale, with items such as “Intelligence is something basic about a person that cannot be changed” and “No matter how much intelligence you have, you can change it quite a bit.” It was designed to measure the individual’s relative preference for either an entity or an incremental theory of intelligence. Raw TOI scores ranged from 1 (most incremental) to 6 (most entity). To focus on general differences between people who “endorsed” the two perspectives in which we were interested, we categorized individuals into two groups: entity and incremental.
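To make the scoring concrete, the following is a minimal sketch of how such a questionnaire could be scored and split into the two groups. Which items are reverse-keyed and the midpoint cutoff used for categorization are our illustrative assumptions, not details taken from the published scale or from this study.

```python
# Hypothetical scoring sketch for an eight-item TOI questionnaire on a
# 6-point scale. The reverse-keyed item positions and the categorization
# cutoff are assumptions for illustration only.
def toi_score(ratings, incremental_items=(4, 5, 6, 7)):
    """Mean rating with incremental-worded items reverse-keyed, so that
    higher scores indicate a stronger entity view (1 = most incremental,
    6 = most entity)."""
    keyed = [7 - r if i in incremental_items else r
             for i, r in enumerate(ratings)]
    return sum(keyed) / len(keyed)

def categorize(score, cutoff=3.5):
    # A simple scale-midpoint split; the paper does not state its cutoff.
    return "entity" if score > cutoff else "incremental"

print(categorize(toi_score([6, 5, 6, 5, 2, 1, 2, 1])))  # entity-leaning answers
```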

Design and procedure

This study, and all subsequent studies, used a 2 (TOI manipulation: Entity vs. Incremental) × 2 (Leading trivia level: Easy vs. Difficult) between-subjects factorial design. At the start of the experiment, participants read and signed consent forms. Then, they were randomly assigned to either the “entity” or the “incremental” version of the article, and told to read it for 5 min. Next, each participant solved five multiple-choice trivia questions; half of the participants were given easy problems, while the other half were given difficult problems. After completing the five leading problems, they were given a sixth problem along with the option of solving either an easy or a difficult trivia problem. Each participant went through four blocks, for a total of 24 problems consisting of repeating sets of five leading and one target trivia question. All problems were presented on the computer screen.

Results and discussion

In addition to past experience, we were primarily interested in observing how people’s ideas of intelligence – entity versus incremental – would affect the selections people made in a learning situation. First, to confirm the effect of the TOI manipulation, we conducted a t-test on the data collected from the TOI Questionnaire. On average, participants who were categorized as having an entity view (N = 41, M = 3.28, SD = .98) scored higher than those categorized as having an incremental view (N = 40, M = 2.77, SD = .95), confirming that the manipulation was effective (t = −2.40, p < .05).

A two-way ANOVA was conducted to examine average performance across the five leading trivia questions as a function of their difficulty and TOI group (see Supplementary Table 1). There was no significant interaction (F = .51, p = .48, η2 = .01), nor a significant main effect of TOI group (F = 2.16, p = .15, η2 = .03). However, there was a significant main effect of leading trivia difficulty level (F = 2146.64, p < .001, η2 = .97), indicating that for both groups, the easy trivia questions were solved at a higher rate than the difficult ones.
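For a balanced design such as this one, a two-way ANOVA of this kind can be computed directly from cell means. The sketch below runs the same style of analysis on synthetic accuracy data (the cell means, spreads, and n are illustrative assumptions, not the study’s data), built to show the same qualitative pattern: a large main effect of difficulty with no TOI effect built in.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(7)
n = 20  # per cell; illustrative only, not the study's cell size

# cells[toi][difficulty] -> accuracy scores; easy items solved far more often
cells = {t: {"easy": rng.normal(0.9, 0.08, n).clip(0, 1),
             "difficult": rng.normal(0.3, 0.08, n).clip(0, 1)}
         for t in ("entity", "incremental")}

def two_way_anova(cells):
    """Balanced 2 x 2 between-subjects ANOVA, computed from cell means."""
    data = np.array([[cells[t][d] for d in ("easy", "difficult")]
                     for t in ("entity", "incremental")])  # shape (2, 2, n)
    grand = data.mean()
    a_eff = data.mean(axis=(1, 2)) - grand   # TOI main-effect deviations
    b_eff = data.mean(axis=(0, 2)) - grand   # difficulty main-effect deviations
    cell_m = data.mean(axis=2)
    inter = cell_m - grand - a_eff[:, None] - b_eff[None, :]
    ss = {"A": 2 * n * (a_eff ** 2).sum(),        # sums of squares
          "B": 2 * n * (b_eff ** 2).sum(),
          "AxB": n * (inter ** 2).sum(),
          "error": ((data - cell_m[..., None]) ** 2).sum()}
    df_err = data.size - 4
    return {k: (ss[k] / (ss["error"] / df_err),          # F (df1 = 1)
                f_dist.sf(ss[k] / (ss["error"] / df_err), 1, df_err))
            for k in ("A", "B", "AxB")}

res = two_way_anova(cells)
print({k: (round(F, 2), round(p, 4)) for k, (F, p) in res.items()})
```

With these synthetic parameters, the difficulty main effect (B) dominates, mirroring the direction of the reported result, though of course not its exact F value.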

Our main goal was to observe the difficulty-level selections for the sixth, target trivia question (see Supplementary Table 2). A two-way ANOVA by difficulty level of the leading trivia and TOI group revealed a significant interaction (F = 4.47, p < .05, η2 = .06). There was also a main effect of the difficulty level of the leading trivia questions (F = 11.13, p < .01, η2 = .13). However, we found no significant effect of TOI group (F = .87, p = .36, η2 = .01).

We then conducted a simple comparison analysis to better understand the interaction between leading problem difficulty and TOI. For the incremental group only, there was a significant difference in selection of the target item when it came to the difficulty level of the leading questions – they were more likely to select the easy target item after having experienced difficult prior items (df = 1, F = 14.67, p < .001). The entity group’s rate of selecting the easy target item did not differ as a function of prior difficulty level (df = 1, F = .76, p = .39). The incremental group was also more likely to select the easy target item as compared to the entity group, following difficult leading questions (df = 1, F = 4.58, p < .05); there was no difference between the groups following easy leading questions (df = 1, F = .71, p = .40).

As is captured in Fig. 1, the participants holding an incremental view were more likely to select an easy target problem, but only when the leading items had been difficult. This was not so for the entity participants, who selected the easy and difficult problems approximately equally often, regardless of having experienced easy or difficult leading trivia questions. At first glance, it might seem ironic that the participants holding an incremental view were more likely to select, at least in one context, what we might have predicted for the entity theorists – a “desirable ease” (DE) strategy (see Supplementary Table 3). On the other hand, it may be that those who hold an incremental view have a more realistic sense of their own learning. That is, if they had already experienced five difficult prior questions, they may be more aware of the unfortunate but likely fact that they would not be able to solve a similarly difficult problem in a timely manner. This view supports the ME strategy, where an individual makes decisions based on their own past experiences. However, the current data suggest that the particular strategy chosen may depend on how one views effort or intelligence. Here, participants in the entity condition seemed not to sway in either direction – they selected the easy and difficult problems about an equal number of times. What is important, however, is that the entity theorists’ choices were not impacted by past experiences – for them, the ME strategy did not obtain.

Fig. 1 Experiment 1: Selection rate of the easy target trivia question, conditionalized on the difficulty level of the leading trivia questions and TOI (Korean participants)

We were somewhat surprised by our finding that entity theorists did not seem to be obviously influenced by their past. At the same time, however, as described in the introduction, entity theorists have been found to react to feedback with avoidance during learning, while incremental theorists seem more likely to attend to feedback (e.g. Hong et al. 1999). Thus, avoidance of feedback may be comparable to avoiding past performance, and one way to interpret the entity data is to conclude that entity theorists are actively choosing to ignore past experiences. Delving deeper into the literature, we found that, in fact, a handful of studies have provided evidence that incremental individuals, or those more focused on learning goals than performance goals, are more willing to seek feedback than entity individuals (Grant and Dweck 2003; Mangels et al. 2006; VandeWalle 2003; Waller and Papi 2017). For instance, Waller and Papi (2017) found that incremental subjects sought more feedback in order to improve their writing competence, while entity subjects avoided feedback, believing that any negative feedback would be received as an invalidation of their abilities. This finding is in line with an original hypothesis put forth by Dweck (2000), who said that incremental folks see feedback as an opportunity for growth while entity folks view feedback as a threat to their image. Further, Mangels et al. (2006) found that when corrective feedback was presented, increased brain activity was shown only by incremental theorists and not by entity theorists. This is a fascinating finding, and, taken together with the Waller and Papi finding and others, it may be relevant to our data. If incremental theorists seek feedback while entity theorists do not, this could explain why incremental choices differed depending on past experiences, and why entity choices did not – because the past feedback was not sought out but, rather, avoided.

Experiment 2

The results of Experiment 1 suggest that prior experiences may influence current choices during learning, but in some unexpected ways. Following very difficult problems, participants – in particular those who were manipulated to feel an incremental mindset – were less likely to choose the difficult target item to solve, and instead tended to choose the easier problems. On the flip side, following easy problems, the same group went back to what is generally expected of them, the difficult choice, resulting in what looks like an ME strategy in which current decisions are conditionalized on past successes and failures. The ME strategy did not pan out for the entity theorists, however, who seemed to use a strategy that lies somewhere between “desirable ease” and “desirable difficulty” – there were no differences that apparently depended on past experiences with easy or difficult items. In line with our interpretation – that perhaps entity individuals avoid feedback from the past – we predicted, going forward, that we might once more see a middling, nondifferent choice across conditions for the entity theorists.

On the whole, these results, especially the incremental ones, seemed confusing to us at first glance, but allowed us to wonder: Under what circumstances would incremental theorists discount past successes and failures, thereby behaving, as expected, with higher effort? In our next experiment, we decided to test whether irrelevance of past experiences would bring incremental theorists from the ME strategy back to valuing high effort, and desirable difficulties. The notion that prior learning would affect current choices assumes that the prior and current tasks are very similar or of the same type. Indeed, in Experiment 1, both the leading and target questions consisted of trivia problems taken from the same pool. If one were to base a current decision on past problems, it seems logical that only past items of a similar type would have an effect. In Experiment 2, we examined whether the contribution of past experiences would, then, be diminished as the tasks became dissimilar. Our hypothesis was that incremental participants, even those who had experienced past challenges or failures, would choose to exert more effort on a current task, so long as it was from a novel category. This would support the claim that if a relevant past – experience with similar tasks – were available, the ME strategy would play out. On the other hand, if a relevant past were not available, other, more typical strategies would kick in. For instance, the incremental participants, more so than the entity participants, might then switch to the Desirable Difficulty (DD) strategy, trusting their “default” belief that effort helps learning; and vice versa – Desirable Ease (DE), or at least a non-difference (given an avoidance of feedback) – might be preferred by the entity group.

Participants

Ninety undergraduate students (Male = 44, Female = 48, Mage = 21.52, SDage = 2.54) attending a 4-year college in South Korea participated in this experiment. None had participated in Experiment 1. All signed consent forms prior to beginning the experiment and received credit for their participation.

Design and procedure

The design and procedure of Experiment 2 were the same as those of Experiment 1. However, the final, 6th question – the target question – was not selected from the same category pool as the leading items. Here, while the leading questions were trivia problems (as they had been in Experiment 1), the target problem was an analogy problem (e.g., Saw : Wood :: Needle : X? 1. Fabric 2. Water 3. Paper 4. Iron).

Results and discussion

As before, to check the effects of the TOI manipulation, we conducted a t-test on the data from the TOI Questionnaire given at the end of the task session. We found a significant difference between entity (N = 46, M = 3.33, SD = 1.00) and incremental (N = 44, M = 2.93, SD = .70) groups (t = −2.24, p < .05), indicating that the manipulation was successful.
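The manipulation check above can be sketched in code. The group sizes, means, and SDs below are taken from the text, but the individual scores are simulated, so the resulting statistic is illustrative only, not a reproduction of the reported result.

```python
# Sketch of the TOI manipulation check: an independent-samples t-test on
# TOI questionnaire scores. Individual scores are simulated from the group
# means and SDs reported in the text; they are not the experimental data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
entity = rng.normal(3.33, 1.00, 46)        # N = 46, M = 3.33, SD = 1.00
incremental = rng.normal(2.93, 0.70, 44)   # N = 44, M = 2.93, SD = .70

t, p = stats.ttest_ind(entity, incremental)  # pooled-variance t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```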

We then conducted a two-way ANOVA on performance on the leading trivia questions as a function of their difficulty level and TOI group (see Supplementary Table 4). There was no significant interaction between the difficulty level of the leading questions and TOI group (F = 1.24, p = .27, η2=.01). There was also no significant difference between TOI groups, albeit a marginal one in favor of the incremental group (F = 3.50, p = .07, η2=.04). However, the analysis did show a significant main effect of difficulty level (F = 1855.60, p < .001, η2=.96), indicating that participants in both groups performed better on the easy trivia questions.
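The 2 × 2 (difficulty × TOI group) analysis described above can be sketched as a balanced two-way ANOVA computed by hand. The cell scores here are simulated placeholders (chosen so that easy questions yield higher accuracy, as in the reported data); the cell size n is hypothetical.

```python
# Minimal sketch of a balanced two-way ANOVA (difficulty level x TOI group),
# the analysis applied to the leading-question accuracy scores. The cell
# data are simulated placeholders, not the experimental data.
import numpy as np

rng = np.random.default_rng(1)
n = 10  # hypothetical observations per cell
cells = {
    ('easy', 'entity'):      rng.normal(0.90, 0.05, n),
    ('easy', 'incremental'): rng.normal(0.91, 0.05, n),
    ('hard', 'entity'):      rng.normal(0.20, 0.05, n),
    ('hard', 'incremental'): rng.normal(0.22, 0.05, n),
}

grand = np.mean([x for v in cells.values() for x in v])
levels_a = ['easy', 'hard']           # factor A: difficulty
levels_b = ['entity', 'incremental']  # factor B: TOI

mean_a = {a: np.mean(np.concatenate([cells[(a, b)] for b in levels_b]))
          for a in levels_a}
mean_b = {b: np.mean(np.concatenate([cells[(a, b)] for a in levels_a]))
          for b in levels_b}
cell_mean = {k: v.mean() for k, v in cells.items()}

# Sums of squares for main effects, interaction, and within-cell error
ss_a = n * len(levels_b) * sum((mean_a[a] - grand) ** 2 for a in levels_a)
ss_b = n * len(levels_a) * sum((mean_b[b] - grand) ** 2 for b in levels_b)
ss_ab = n * sum((cell_mean[(a, b)] - mean_a[a] - mean_b[b] + grand) ** 2
                for a in levels_a for b in levels_b)
ss_w = sum(((v - cell_mean[k]) ** 2).sum() for k, v in cells.items())

df_a, df_b = len(levels_a) - 1, len(levels_b) - 1
df_w = len(cells) * (n - 1)

f_a = (ss_a / df_a) / (ss_w / df_w)   # main effect of difficulty
print(f"F(difficulty) = {f_a:.1f}")
```

With a 0.90-versus-0.20 accuracy gap and small within-cell noise, the main effect of difficulty dominates, mirroring the very large F for difficulty reported in the text.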

We then conducted a two-way ANOVA to address our primary question: How would people select the difficulty level of the final target question now that it was taken from a different category of questions (analogy)? If the leading questions differed in type from the target question, would they no longer matter? The descriptive statistics are in Supplementary Table 5 and Fig. 2 (also see Supplementary Table 6 for selection rates across blocks).

Fig. 2
figure2

Experiment 2: Selection rate of easy target analogy question conditionalized on the difficulty level of the leading trivia questions and TOI (Korean participants)

As may be inferred, there was a significant interaction between the difficulty level of the leading trivia questions and TOI group (F = 10.20, p < .01, η2=.11). However, there were no significant main effects of either the difficulty level of the leading trivia questions (F = .25, p = .62, η2=.00) or TOI (F = 1.39, p = .24, η2=.02). With simple comparison analyses of the difficulty-level selection for the final target analogy question, we found that for the incrementally manipulated group there was only a slight difference by leading-question difficulty (df = 1, F = 3.63, p = .06), and, overall, they tended to select the difficult analogy more than half the time. These data seemingly go against the data found in Experiment 1, but given that the final target question type was the main procedural difference, we interpreted them to mean that when the previous experience is less relevant – or less overlapping in similarity – the more general DD strategy may have kicked in. In other words, we believe that individuals who endorsed an incremental view weighed effort more heavily on the current task, given that no previous failure on a similar task had yet occurred.

We did, however, find a significant difference by difficulty level of the leading questions for the entity group (df = 1, F = 8.12, p < .01), who tended to select the more difficult analogy following easy questions but flipped to selecting the easier analogy following difficult questions. These data go against their parallel data in Experiment 1 and also fly in the face of our interpretation that entity individuals avoid seeking knowledge of the past. Here, in Experiment 2, the data would suggest that the entity participants were aware of their past successes and failures and selected accordingly – choosing the more difficult analogies after solving easier trivia questions [indeed, the entity and incremental groups did not differ (df = 1, F = 2.10, p = .15) in their choice rate following the easy leading trivia questions]. Complicated and mixed strategies are certainly at play, and so, while we do not wish to throw out our interpretation of Experiment 1's entity data just yet, we admit that further exploration of this point is necessary.

Thus far, we have presented data suggesting it would be far too simple to describe those with incremental views as always following the well-known rule of "putting in effort." While incremental theories have often been associated with a higher value placed on effort, our data show that the weight of past experiences – particularly failures – might initiate a metacognitive strategy that is unexpected but sensible: conditionalizing on past experiences as a way of avoiding laboring in vain. In Experiment 1, we found that incremental theorists were, indeed, more likely to conditionalize their strategy on past easy versus difficult experiences, but they fell back on their typical effortful strategy in Experiment 2, when the past tasks were less relevant. And for the entity theorists, while the data are inconsistent (we proposed an interpretation in Experiment 1 that was challenged in Experiment 2), we do not believe the data are illogical: There may be cases when entity individuals avoid seeking feedback from past experience, but if not, then past experiences with more successes might, as expected, lead to a greater selection of challenging current tasks.

Overall, we believe that these data are interesting and are likely to lead to further thoughts about how and when metacognitive processes are called upon during learning. Motivated by a reviewer’s suggestion, we also were curious to see if the patterns in our data would apply more generally, to individuals living in America. To this end, we attempted to replicate Experiments 1 and 2 within a college population in the United States, in Experiment 3.

Experiment 3

Experiments 3A and 3B were conducted to see whether the patterns found above would hold more generally in the US. We predicted that, overall, we would find results similar to those of Experiments 1 and 2, respectively. That is, we expected that when individuals were manipulated to endorse an incremental view, as compared to an entity view, they would use the ME strategy, selecting difficult items when prior knowledge was irrelevant but easier items when prior failures were taken into account, thereby avoiding laboring in vain. Given that past research comparing East and West showed that Asian students place a higher priority on effort in general (Stevenson et al. 1986; Stevenson and Stigler 1992), we nevertheless thought there might be a different pattern in our data when testing American students. For instance, we wondered whether placing a high priority on effort would be a sort of prerequisite for applying the ME strategy. In other words, if effort is indeed weighted less in the West than in the East, do past experiences regarding effort get discounted, diminishing the ME strategy?

Experiment 3A

Methods

Sixty (see Footnote 2) undergraduate students (Male = 9, Female = 51, Mage = 19.75, SDage = 1.90) attending a 4-year college in New York City participated. All participants signed consent forms prior to the experiment and received credit for their participation. The design, procedures, and materials were the same as those used in Experiment 1 – all of the leading and target questions were trivia questions, allowing us to see what participants selected when their past successes and failures were relevant to the current task. The trivia questions were taken from the same pool as in Experiment 1, and all participants were given the TOI questionnaire as before.

Results

To confirm the effect of the TOI manipulation, we conducted a t-test on the data collected from the TOI Questionnaire. On average, participants who were categorized as having an entity view (N = 30, M = 4.47, SD = .75) had a higher score (t = −6.38, p < .001) than those categorized as having an incremental view (N = 30, M = 3.1, SD = .91), allowing us to trust the manipulation.

A two-way ANOVA was conducted to confirm the average performance across the leading trivia questions as a function of their difficulty by TOI group (see Supplementary Table 7). As can be seen, there was no significant interaction (F = 2.12, p = .15, η2=.04). There was also no significant main effect of TOI groups (F = .21, p = .65, η2=.00). However, there was a significant main effect of difficulty level of the leading trivia questions (F = 46.86, p < .001, η2=.46), indicating that for both groups, the easy trivia questions were better solved than were the difficult ones.

Our main interest was in the difficulty-level selections for the 6th, target trivia question. The means can be seen in Fig. 3 (also see Supplementary Tables 8 and 9). The first thing to notice is that, compared to the Korean population tested in Experiment 1, the US participants appeared to select the difficult items more often in general – going against what would be expected if one believed the notion that effort is traditionally more highly valued in the East than in the West. We thought that one possible reason for this finding may have been that the same set of trivia was presented in both Korea and the US, perhaps shifting the perception of what had been judged as "easy" and "difficult" – more on this at the end of Experiment 3B. Nevertheless, recognizing this common issue in cross-cultural comparisons, we thought that the overall patterns would be informative.

Fig. 3
figure3

Experiment 3A: Selection rate of easy target trivia question conditionalized on the difficulty level of the leading trivia questions and TOI (US participants)

A two-way ANOVA by difficulty level of the leading questions and TOI group revealed a trend toward an interaction (F = 2.86, p = .10, η2=.05) in the same direction as the results of Experiment 1. There was no significant main effect of the difficulty level of the leading trivia questions (F = .65, p = .43, η2=.01) or of TOI group (F = .22, p = .64, η2=.00). When we conducted a simple comparison analysis between leading-problem difficulty and TOI, we found that for the incremental group, although the data did not reach statistical significance, the expected numerical pattern was obtained: Those in the incremental group were somewhat more likely to select the easy target item after having experienced difficult prior items (df = 1, F = 3.10, p = .08). On the other hand, again in support of Experiment 1, the entity group's rate of selecting the easy target item did not differ as a function of prior difficulty level (df = 1, F = .39, p = .53). Rather, the entity group tended to select the difficult problem the majority of the time, regardless of past experience. These data allow us to continue to consider the notion that entity individuals, compared to incremental individuals, tend to avoid the feedback available from past tasks.

Taken together, the results of Experiment 3A suggest that the importance of past experience for current study selection is not trivial, for both the Korean (Experiment 1) and American (current experiment) cohorts tested here. Specifically, those who were led to take an incremental view were more likely to shift to a strategy resembling what we call the ME strategy, exerting effort on the difficult task more readily when there was a high (albeit unknown) probability of success, and forgoing the difficult for the easy when the probability of success was known to be low. These data provide a hint of further evidence for the ME strategy, or the possibility that metacognitive processes are initiated when the individual learner has relevant information from past experiences. We were also intrigued to find that the entity theorists' choices did not differ based on past level of difficulty, making us wonder again whether our interpretation of Experiment 1's data might hold up – that feedback from past experiences is not given as much weight, or is even avoided. We discuss this further below, following the results of Experiment 3B.

Experiment 3B

Simultaneously with Experiment 3A, we used the procedure of Experiment 2 to investigate whether our results would hold up in a U.S. cohort. That is, if the prior items (trivia) are less similar to the current target choices (analogies), would participants be more likely to default to their traditionally followed strategies? In other words, would those manipulated to endorse an incremental view use the desirable difficulty strategy (selecting more difficult analogies)? Given the results of all previous experiments, we were also curious to see how the entity individuals would behave.

Methods

Fifty-eight (see Footnote 3) undergraduate students (Male = 9, Female = 49, Mage = 20.21, SDage = 2.61) attending a 4-year college in New York City were tested. All participants signed consent forms prior to the experiment and received credit for their participation. The design, procedures, and materials were the same as those used in Experiment 2 – the leading questions were trivia while the target questions were analogies, allowing us to see what participants selected when their past successes and failures were less relevant to the current task. As before, all participants were given a TOI questionnaire.

Results

As before, to check the effects of the TOI manipulation, we conducted a t-test on the data from the TOI Questionnaire given at the end of the task session. We found a significant difference between entity (N = 28, M = 4.40, SD = 1.04) and incremental (N = 30, M = 3.00, SD = 1.08) groups (t = 5.03, p < .001), indicating that the manipulation was successful.

We then conducted a two-way ANOVA on performance on the leading trivia questions as a function of their difficulty level and TOI group (see Supplementary Table 10). As before, and as expected, we did not find a significant interaction between the difficulty level of the leading questions and TOI group (F = .48, p = .49, η2=.01). There was also no significant difference between TOI groups (F = .00, p = .99, η2=.00). We did find a significant main effect of the difficulty level of the leading trivia questions (F = 58.82, p < .001, η2=.52), indicating that for both groups, the easy trivia questions were better solved than the difficult ones.

We then conducted a two-way ANOVA to address our primary question once again: How would people select the difficulty level of the final target (analogy) question? And given that the leading questions differed in type from the target question, would the effects of experience be relatively weak? The means are presented in Fig. 4 (also see Supplementary Tables 11 and 12). As can be seen, there was no significant interaction between the difficulty level of the leading trivia questions and TOI group (F = .29, p = .59, η2=.01). This was not in line with our results from Experiment 2 with the Korean cohort. There were also no significant main effects of the difficulty level of the leading trivia questions (F = .29, p = .59, η2=.01) or of TOI group (F = .22, p = .64, η2=.00). The simple comparison analyses also confirmed that, unlike in our previous findings, past experience did not impact the selection of the target analogy.

Fig. 4
figure4

Experiment 3B: Selection rate of easy target analogy question conditionalized on the difficulty level of the leading trivia questions and TOI (US participants)

We did, instead, find that, overall, individuals in this US cohort tended to select the difficult analogies more often – around 70% of the time. As we considered in Experiment 3A, one reason for this finding might be that the leading trivia may not have been, or may not have been perceived to be, sufficiently difficult, particularly for US students, suppressing any potential influence of past experiences or TOI default strategies. By contrast, they may have been perceived as appropriately, relatively more difficult by the Korean cohort in Experiment 2, who had read these same phrases in their non-native (see Footnote 4) language.

Post-hoc analysis on language differences

To examine whether the leading trivia questions may have been perceived differently across the two countries, we compared the accuracy levels on the leading trivia questions for the Korean and American data, collapsing across Experiments 1 and 3A (Fig. 5, left panel) – where the final questions were trivia – and Experiments 2 and 3B (Fig. 5, right panel) – where the final questions were analogies. As can be seen, while for both the Korean and American participants the difficult trivia were harder than the easy trivia, this gap was much larger for the Korean participants. The US participants answered 78.71% of even the difficult trivia correctly, compared to the Korean participants, who answered only about 20% of the difficult trivia correctly. Thus, it may be that our lack of replication in Experiment 3B was due to the leading questions not being sufficiently differentiable by the US participants – although they were numerically distinct. What we might surmise, then, at the least, is that without "feeling" significant failure in the recent past, and when the past was relatively irrelevant, the US participants in Experiment 3B all tended to "go for it" – the DD strategy was found across the board. We believe that this finding is not trivial. When strings of successes are achieved, perhaps there is an increase in overall confidence going forward. The question of the discriminability of past successes and failures in guiding current and ongoing choices is fascinating, but it points to the fact that, on the whole, metacognitive decisions are extremely complex and merit continued examination.

Fig. 5
figure5

Accuracy rate of the leading trivia questions collapsing the Korean (Experiment 1) and American participants (Experiment 3A) in the left panel (when the final questions were trivia), and collapsing the Korean (Experiment 2) and American participants (Experiment 3B) in the right panel (when the final questions were analogies)

General discussion

In this paper, we attempted to highlight the complicated nature of the metacognitive choices that people make. While many factors (Bjork et al. 2013; Dweck et al. 1995; Finn 2010; Koriat 2018; Koriat and Ackerman 2010; Rhodes and Castel 2008; Sternberg 2000), both known and unknown, influence people's behaviors in various ways, we focused on two: (1) Past experiences: difficult versus easy, and (2) TOI: incremental versus entity perspectives. Data from the first two experiments, in which we tested college students living in Korea, tell us that even just-prior experiences – such as successes and failures – may have an impact on people's current choices, and that those choices might also depend on how one views intelligence and effort. Specifically, while the general default strategies of incremental and entity theorists may be high-effort and low-effort selections, respectively, those strategies seemed to shift based on the individual's past experiences.

We found a pattern here that may be interpreted as follows: Those endorsing an incremental perspective value effort and will take on difficult tasks, but if they have already encountered similar difficult tasks or failure, they seem to resort to easier tasks, presumably so as to avoid laboring in vain. We describe this strategy as a metacognitively driven one, acknowledging that there are times when desirable difficulty is, rightfully, undesired. The data here, especially in Experiment 1 for Korean students – and Experiment 3A, in which we found a numerical pattern in the same direction for US students – support this idea.

We found, on the flipside, that entity theorists – in particular the Korean participants – tended not to shift strategies – their choices did not change as a function of past experiences with difficult versus easy tasks, despite past experiences being relevant. This result was somewhat surprising, and merits further investigation for sure. One interpretation – that we pushed perhaps too aggressively in the discussion of Experiment 1 – could be that incremental theorists put more weight on efforts exerted in the past, as well as on current tasks. That is, we proposed that incremental theorists seemed to be taking careful stock of past successes and failures while entity theorists may have avoided past feedback. Unfortunately, in Experiment 2, our interpretation was rejected: Even entity theorists seemed to select tasks conditionalized on past experience. Given that we did, however, find the same nondifference in the US entity participants in Experiment 3A, we are not quite willing to throw out our original interpretation. We believe, instead, that such decisions are very complex, and only much more data will provide a clearer picture.

We also examined relevance as a factor that might impact how one might "take stock" of the past. In Experiments 1 and 3A, the procedure we used to examine the effects of prior experience was to present a series of 5 leading trivia questions to participants immediately before giving them a choice – a difficult or an easy trivia question – to make. In Experiments 2 (Korean cohort) and 3B (U.S. cohort), however, we changed the relevance of the past experiences. We asked: If past experiences are unrelated to the current selection, does monitoring of past efforts and their consequences no longer apply? In other words, rather than initiating monitoring and control processes, would more general default strategies such as DD and DE ensue? To test this, participants were given the 5 trivia questions – either easy or difficult – but were then presented with a final target question that differed in type: an analogy. Interestingly, but perhaps not surprisingly given how we interpreted the data from Experiment 1, the incremental participants in the Korean cohort were now more likely to select the difficult analogy. Our guess, in other words, was that past failures on the trivia were seen as unrelated to the current analogy, so there was no sufficient reason to think one might fail on the current novel problem. We did not replicate this finding in the U.S. cohort, but believe that the trivia may not have been thought sufficiently difficult in that study (indeed, in our final analysis offered at the end of Experiment 3, the US participants performed at 78% even on the difficult items, suggesting that the experience of failure was simply not salient). So, while we have what we believe to be a coherent interpretation for the Korean participants, further research on different populations, with more challenging materials, would help address the generalizability of this particular aspect of the data.
Furthermore, an additional idea – whether there is a threshold for the discriminability of past successes and failures – would be a worthwhile plan for future examination.

The most interesting finding, which we already highlighted above, was that the patterns we found were not what we might intuitively think. Intuitively, we might have assumed that those in the incremental group (as compared to the entity group) would be more likely to select the more effortful problem – following the advice of desirable difficulties. After all, incremental thinkers typically endorse an effort-helps-learning mindset. On the contrary, our data showed that the incremental participants were more likely to select the easier problems, but only after having experienced difficult leading problems. The entity group’s data were more inconsistent. In Experiment 1, they did not show a significant difference either way – their selections did not differ with prior experience. In Experiment 2, however, we found that the entity theorists were more likely to select the difficult analogies, but only after solving the easy trivia problems. We proposed that in one situation, entity theorists may not be in tune with past feedback, whereas in other situations, they may be looking more like the incremental theorists. We admit that these thoughts will need much more investigation beyond the current data.

E. L. Bjork and Bjork stated the following: “Desirable difficulties, versus the array of undesirable difficulties, are desirable because they trigger encoding and retrieval processes that support learning, comprehension, and remembering. If, however, the learner does not have the background knowledge or skills to respond to them successfully, they become undesirable difficulties.” (2011, p. 58, italics added). This may be true, but in the current research, we provide the beginning of an idea that shows that desirable difficulties can become undesirable not when the learner “does not have background knowledge”, but rather, when the (incremental) learner has additional background knowledge – the knowledge of their past experiences. While unintuitive at first, we found that people who hold an incremental view may not simply exert effort at all times unconditionally. Rather, they might be more attuned to how past efforts impacted learning. In other words, if they had exerted effort on a series of (difficult) problems already, but experienced a good amount of laboring in vain, then they might be aware that attempting a similarly difficult problem would likely result in failure. Thus, they might choose, quite intelligently, to solve the easy, and not the difficult, problem.

Consequences of effort are not easy to monitor. As mentioned in the introduction, sometimes effort leads to success; other times it leads to failure. We posit that those who believe that effort can potentially lead to good learning – namely the incremental theorists – are more likely to catch opportunities that would help them understand how effort works. If, as in the entity group, effort is thought to be of no consequence, monitoring the effects of effort might be less likely to be "practiced" and, as a result, less likely to occur. That is, we believe that there might be a deeper difference between the two TOI groups: Perhaps only the incremental group puts weight on effort, leading to a more intentional metacognitive monitoring of past efforts and failures. In the end, the monitoring of effort could, we believe, lead to metacognitive control, in the direction of avoiding laboring in vain by selecting the easier current problems. This interpretation has important implications for study-time allocation models (Metcalfe 2009; Metcalfe and Finn 2013; Son and Kornell 2008), where allocation decisions have, for the most part, considered current levels of difficulty – i.e., discrepancy reduction and the region of proximal learning (RPL). The data and discussion here suggest that allocation models may need to be updated to include not only TOI ranges, but also experiences with difficulty and ease in the recent, and relevant, past. While the inclusion of past experience would complicate such models, it would also allow us to move closer to mimicking real-world learning selections, which have no end when it comes to complexity.

Overall, the data here highlight people's views on effort and the impact of past experiences. In addition, the data allow us to think much more deeply about how metacognition is defined. In the most common sense, metacognition is knowing how much we know and don't know, presumably to fill any gaps in knowledge – if we are aware of not knowing, we should continue to study. Here, however, we find an interesting case of potential laboring in vain, and a particular scenario in which incremental individuals choose to avoid it. We interpret this as a "ME," or metacognition-by-experience, strategy, where heightened awareness of past failures and their relation to effort is a deliberate strategy, hinting not at "avoiding effort" but rather at knowing, or admitting, that sometimes effort might not be sufficient. Only when past experience – or, better yet, relevant experience – is unavailable do individuals seem to use a more general metacognitive strategy, falling back on either the more predictable desirable difficulties ("DD") or desirable ease ("DE") strategies that we have discussed here.

Learning comes in many forms. Accordingly, metacognitive choices are extremely difficult to understand. After all, these processes are accessible only to the "privileged" individual, that is, the self. Still, in these experiments and in ongoing research, we have begun to understand the complexities and to test the various factors that might come into play. How people use their past experiences and their unique dispositions is only the start of understanding study selections when yearning to learn.

Notes

  1.

    Performance did not change across blocks in any of the experiments, so they are not included in the analyses.

  2.

    We had planned to test 80 participants according to the G*Power calculations, but were able to collect data from only 60 participants before the COVID-19 pandemic halted human subjects testing. An analysis of our data at this point, however, revealed a few general patterns we thought would be interesting and important as a check on the previous experiments. Thus, we chose to include this experiment in our report.

  3.

    We had planned to test 80 participants according to the G*Power calculations, but were able to collect data from only 58 participants before the COVID-19 pandemic halted human subjects testing. As in Experiment 3A, an analysis of our data at this point nevertheless allowed us to understand our data and helped interpret the prior data. Thus, we chose to include this experiment in our report.

  4.

    While the materials were not presented in their native language, all Korean students are rigorously educated in the English language, and we saw no issues or obstacles when carrying out Experiments 1 and 2 in Korea. Conversely, for future studies, the difficulty level should be adjusted – i.e., made more difficult – for the US participants.

References

  1. Aronson, J., Fried, C. B., & Good, C. (2002). Reducing stereotype threat and boosting academic achievement of African-American students: The role of conceptions of intelligence. Journal of Experimental Social Psychology, 38(2), 113–125.

  2. Bae, J. (2016). Effect of the difficulty of prior task on confidence and resolution for subsequent task. Suwon: Ajou University, Doctoral dissertation.

  3. Bergen, R. S. (1991). Beliefs about intelligence and achievement-related behaviors. Urbana: University of Illinois.

  4. Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–204). Cambridge: MIT Press.

  5. Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society, 2(59–68), 55–64.

  6. Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444.

  7. Briñol, P., Petty, R. E., & Tormala, Z. L. (2006). The malleable meaning of subjective ease. Psychological Science, 17(3), 200–206.

  8. Dunlosky, J., & Matvey, G. (2001). Empirical analysis of the intrinsic–extrinsic distinction of judgments of learning (JOLs): Effects of relatedness and serial position on JOLs. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(5), 1180.

  9. Dweck, C. S. (2000). Self-theories: Their role in motivation, personality, and development. London: Psychology Press.

  10. Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256–273.

  11. Dweck, C. S., Chiu, C.-Y., & Hong, Y.-Y. (1995). Implicit theories and their role in judgments and reactions: A word from two perspectives. Psychological Inquiry, 6(4), 267–285.

  12. Ehrlinger, J., Mitchum, A. L., & Dweck, C. S. (2016). Understanding overconfidence: Theories of intelligence, preferential attention, and distorted self-assessment. Journal of Experimental Social Psychology, 63, 94–100.

  13. Finn, B. (2010). Ending on a high note: Adding a better end to effortful study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(6), 1548.

  14. Finn, B., & Metcalfe, J. (2007). The role of memory for past test in the underconfidence with practice effect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(1), 238.

  15. Grant, H., & Dweck, C. S. (2003). Clarifying achievement goals and their impact. Journal of Personality and Social Psychology, 85(3), 541–553.

  16. Hertzog, C., Kidder, D. P., Powell-Moman, A., & Dunlosky, J. (2002). Aging and monitoring associative learning: Is monitoring accuracy spared or impaired? Psychology and Aging, 17(2), 209–225.

  17. Hong, Y.-Y., Chiu, C.-Y., Dweck, C. S., Lin, D. M.-S., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77(3), 588.

  18. Koriat, A. (1997). Monitoring one’s own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126(4), 349–370.

  19. Koriat, A. (2018). When reality is out of focus: Can people tell whether their beliefs and judgments are correct or wrong? Journal of Experimental Psychology: General, 147(5), 613–631.

  20. Koriat, A., & Ackerman, R. (2010). Choice latency as a cue for children’s subjective confidence in the correctness of their answers. Developmental Science, 13(3), 441–453.

  21. Koriat, A., Nussinson, R., & Ackerman, R. (2014). Judgments of learning depend on how learners interpret study effort. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(6), 1624.

  22. Kornell, N., & Hausman, H. (2017). Performance bias: Why judgments of learning are not affected by learning. Memory & Cognition, 45(8), 1270–1280.

  23. Kornell, N., & Son, L. K. (2009). Learners’ choices and beliefs about self-testing. Memory, 17(5), 493–501.

  24. Labroo, A. A., & Kim, S. (2009). The “instrumentality” heuristic: Why metacognitive difficulty is desirable during goal pursuit. Psychological Science, 20(1), 127–134.

  25. Leggett, E. L. (1985). Children’s entity and incremental theories of intelligence: Relationships to achievement behavior. Paper presented at the annual meeting of the Eastern Psychological Association, Boston.

  26. Mangels, J. A., Butterfield, B., Lamb, J., Good, C., & Dweck, C. S. (2006). Why do beliefs about intelligence influence learning success? A social cognitive neuroscience model. Social Cognitive and Affective Neuroscience, 1(2), 75–86.

  27. Mazzoni, G., & Nelson, T. O. (1995). Judgments of learning are affected by the kind of encoding in ways that cannot be attributed to the level of recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(5), 1263.

  28. Metcalfe, J. (2002). Is study time allocated selectively to a region of proximal learning? Journal of Experimental Psychology: General, 131(3), 349–363.

  29. Metcalfe, J. (2009). Metacognitive judgments and control of study. Current Directions in Psychological Science, 18, 159–163.

  30. Metcalfe, J., & Finn, B. (2008). Evidence that judgments of learning are causally related to study choice. Psychonomic Bulletin & Review, 15(1), 174–179.

  31. Metcalfe, J., & Finn, B. (2013). Metacognition and control of study choice in children. Metacognition and Learning, 8(1), 19–46.

  32. Metcalfe, J., & Kornell, N. (2003). The dynamics of learning and allocation of study time to a region of proximal learning. Journal of Experimental Psychology: General, 132(4), 530–542.

  33. Metcalfe, J., & Miele, D. B. (2014). Hypercorrection of high confidence errors: Prior testing both enhances delayed performance and blocks the return of the errors. Journal of Applied Research in Memory and Cognition, 3(3), 189–197.

  34. Metcalfe, J., & Xu, J. (2018). Learning from one’s own errors and those of others. Psychonomic Bulletin & Review, 25(1), 402–408.

  35. Metcalfe, J., Schwartz, B. L., & Joaquim, S. G. (1993). The cue-familiarity heuristic in metacognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(4), 851.

  36. Miele, D. B., & Molden, D. C. (2010). Naive theories of intelligence and the role of processing fluency in perceived comprehension. Journal of Experimental Psychology: General, 139(3), 535–557.

  37. Miele, D. B., Finn, B., & Molden, D. C. (2011). Does easily learned mean easily remembered? It depends on your beliefs about intelligence. Psychological Science, 22(3), 320–324.

  38. Miele, D. B., Son, L. K., & Metcalfe, J. (2013). Children’s naive theories of intelligence influence their metacognitive judgments. Child Development, 84(6), 1879–1886.

  39. Mueller, C., & Dweck, C. (1997). Implicit theories of intelligence: Malleability beliefs, definitions, and judgments of intelligence. Unpublished data, cited in Dweck, C. S. (1999), Self-theories: Their role in motivation, personality, and development. Philadelphia: Psychology Press.

  40. Rhodes, M. G., & Castel, A. D. (2008). Memory predictions are influenced by perceptual information: Evidence for metacognitive illusions. Journal of Experimental Psychology: General, 137(4), 615–625.

  41. Rhodewalt, F. (1994). Conceptions of ability, achievement goals, and individual differences in self-handicapping behavior: On the application of implicit theories. Journal of Personality, 62(1), 67–85.

  42. Roediger III, H. L., Agarwal, P. K., Kang, S. H., & Marsh, E. J. (2009). Benefits of testing memory: Best practices and boundary conditions. In Current issues in applied memory research (pp. 27–63). London: Psychology Press.

  43. Roediger III, H. L., McDermott, K. B., & McDaniel, M. A. (2011). Using testing to improve learning and memory. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 65–74). New York: Worth Publishers.

  44. Son, L. K. (2005). Metacognitive control: Children’s short-term versus long-term study strategies. The Journal of General Psychology, 132(4), 347–364.

  45. Son, L. K., & Kornell, N. (2008). Research on the allocation of study time: Key studies from 1890 to the present (and beyond). In J. Dunlosky & R. A. Bjork (Eds.), Handbook of metamemory and memory (pp. 333–351). New York: Psychology Press.

  46. Son, L. K., & Metcalfe, J. (2000). Metacognitive and control strategies in study-time allocation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(1), 204.

  47. Son, L. K., & Sethi, R. (2006). Metacognitive control and optimal learning. Cognitive Science, 30(4), 759–774.

  48. Son, L. K., & Sethi, R. (2010). Adaptive learning and the allocation of time. Adaptive Behavior, 18(2), 132–140.

  49. Son, L. K., & Simon, D. A. (2012). Distributed learning: Data, metacognition, and educational implications. Educational Psychology Review, 24(3), 379–399.

  50. Sternberg, R. J. (2000). Handbook of intelligence. New York: Cambridge University Press.

  51. Stevenson, H. W., & Stigler, J. W. (1992). The learning gap. New York: Simon & Schuster.

  52. Stevenson, H. W., Lee, S.-Y., & Stigler, J. W. (1986). Mathematics achievement of Chinese, Japanese, and American children. Science, 231, 693–698.

  53. VandeWalle, D. (2003). A goal orientation model of feedback-seeking behaviour. Human Resource Management Review, 13, 581–604.

  54. Waller, L., & Papi, M. (2017). Motivation and feedback: How implicit theories of intelligence predict L2 writers’ motivation and feedback orientation. Journal of Second Language Writing, 35, 54–65.

  55. Yue, C. L., Bjork, E. L., & Bjork, R. A. (2013). Reducing verbal redundancy in multimedia learning: An undesired desirable difficulty? Journal of Educational Psychology, 105(2), 266–277.

Funding

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2016S1A5B5A07919959).

Author information

Corresponding author

Correspondence to Lisa K. Son.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

ESM 1 (PDF 140 kb)

Cite this article

Bae, J., Hong, Ss. & Son, L.K. Prior failures, laboring in vain, and knowing when to give up: Incremental versus entity theories. Metacognition Learning (2020). https://doi.org/10.1007/s11409-020-09253-5

Keywords

  • Labor in vain
  • Desirable difficulty
  • Metacognition
  • Past failures
  • Theories of intelligence
  • Desirable ease