What is the minimum amount of money I am willing to accept for my car?

Have I consumed more than 2,000 calories today?

How long will it take to drive from Chicago to Minneapolis?

The questions above exemplify the regularity with which people make numeric estimates. A large body of research has demonstrated that, although people are sometimes fairly accurate when making numeric estimates, these estimates are influenced by a variety of factors, including a person’s domain-specific knowledge, mood, motivation, the availability of new information, and the application of heuristics (e.g., Brown & Siegler, 1993; Englich & Soder, 2009; LaVoie, Bourne, & Healy, 2002; Simmons, LeBoeuf, & Nelson, 2010). A particularly robust and well-known bias that is relevant to numeric estimation is the anchoring effect. In an initial demonstration, Tversky and Kahneman (1974) had participants judge whether the percentage of African countries in the United Nations was higher or lower than a supposedly random number—that is, an anchor. Participants then estimated the percentage of African countries in the UN. When the anchor value was 65 %, participants gave higher estimates than when the anchor value was 10 %. Similar anchoring effects have been found in a wide variety of situations, including ratings of university professors (Thorsteinson, Breier, Atwell, Hamilton, & Privette, 2008), salary negotiations (Thorsteinson, 2011), and medical judgments (Brewer, Chapman, Schwartz, & Bergus, 2007).

An important point for the present article is that the biasing influence of anchors is not something that plagues only novices or nonexperts. A number of studies have shown that people with high and low levels of knowledge are both influenced by anchors (e.g., Brewer et al., 2007; Englich, 2008; Englich, Mussweiler, & Strack, 2006; Northcraft & Neale, 1987). For example, legal experts and nonexperts exhibited similar anchoring effects when making decisions about hypothetical cases (Englich et al., 2006). Likewise, experienced real-estate agents’ and undergraduate students’ estimates of home prices were equally influenced by comparisons with anchor values (Northcraft & Neale, 1987).

However, the full relationship between anchoring and knowledge level is not entirely clear. Alongside studies suggesting that knowledge level does not moderate anchoring effects are others suggesting that high-knowledge people are less influenced by anchors than are their low-knowledge counterparts (e.g., Mussweiler & Englich, 2003; Mussweiler & Strack, 2000; Smith, Windschitl, & Bruchmann, 2013; Wilson, Houston, Etling, & Brekke, 1996). For example, having more experience with the cost of items in a particular currency led to a decrease in anchoring effects (Mussweiler & Englich, 2003). Similarly, Smith et al. measured anchoring effects across many domains within the same study and found that the effects tended to be greatest in domains in which people had the least knowledge.

A general goal motivating the present work was to better understand the influence of knowledge on susceptibility to anchoring effects. A key supposition guiding the work was that in order to generate an adequate understanding of whether and when knowledge moderates or protects people from anchoring effects, we must factor in a distinction between two types of knowledge—metric knowledge and mapping knowledge (Brown & Siegler, 1993).

Metric versus mapping knowledge in a framework for quantitative estimation

Imagine that Jonah is estimating the population of Germany. Jonah’s estimate will be influenced by a variety of factors, including his knowledge, contextual influences (e.g., anchors, motivation), and his use of heuristics (e.g., familiarity with the target). Brown and Siegler (1993; see also Brown, 2002; von Helversen & Rieskamp, 2008) developed a framework of quantitative estimation that addresses how people go about making such estimates. An integral feature of Brown and Siegler’s framework is the distinction between two types of knowledge that one might have about a target. Mapping knowledge refers to how items compare with one another (e.g., Germany is larger than Norway, but smaller than the US). Metric knowledge refers to the general statistical properties (e.g., mean, range) that items in a domain tend to have (e.g., with a few exceptions, country populations tend to be more than 1 million and less than 500 million).

Continuing with the example of Jonah, one way he might go about making his estimate is first to think about how Germany compares to most other countries. Is Germany a small, medium, or large country? Then, Jonah might think about the general range of country populations. Is Germany in the range of 1–20 million, 20–100 million, or 100–500 million? Using Brown and Siegler’s (1993) terminology, Jonah’s estimate will reflect both his mapping and metric knowledge. In order for Jonah to make an accurate estimate, he would need to know both how Germany compares with other countries and the general range of country populations. If either type of knowledge is missing or biased in some way, Jonah’s estimate will likely be inaccurate.

How mapping versus metric knowledge might matter for anchoring research

With regard to the impact of knowledge on anchoring effects, the distinction between metric and mapping knowledge seems critical. In fact, our main expectation in setting up our studies was that only metric and not mapping knowledge mitigates the impact of anchors on estimates, since having accurate metric knowledge would help people know whether an anchor is too high or too low. For example, if a person knows that the populations of large European countries tend to be between 20 and 140 million, he or she would know to give an estimate lower than an anchor value of 250 million. Without that knowledge, the person might adjust in the wrong direction relative to the anchor, leading to increased anchoring effects (see Simmons et al., 2010, for a demonstration of reduced anchoring effects when the direction of adjustment is known). Furthermore, beyond simply knowing whether to give an estimate above or below an anchor, a person with high metric knowledge would have a better grasp of the distribution of values within the range. The person in our example might also know that the vast majority of European countries have populations well below 70 million. Therefore, this person would know not only that an anchor is too high, but also that an accurate answer is likely to be less than 70 million. This would lead to an estimate farther away from the anchor (i.e., a smaller anchoring effect).

In contrast to our description of how metric knowledge might plausibly mitigate anchoring effects, a similar case for pure mapping knowledge is hard to envision. Imagine a person with no metric knowledge but good mapping knowledge about European countries. The fact that this person might know that Germany is larger than France but smaller than Russia gives him or her little insight into the actual population of Germany—leaving that person open to being heavily influenced by comparisons with anchors. In short, there is a good rationale for expecting that metric knowledge, but not mapping knowledge, buffers people against anchoring effects.

However, a viable alternative to this position is that even mapping knowledge buffers people against anchoring effects. It may be that either metric or mapping knowledge could supply people with confidence or a self-perceived rationale about rejecting the provided anchor and adjusting far away from it. Imagine a person with low mapping knowledge and just enough metric knowledge to know that a provided anchor is too high. Without mapping knowledge, this person might not have much of a self-perceived rationale for adjusting far away from the anchor, and would then settle on an estimate close to the anchor (producing large anchoring effects). Another person with high rather than low mapping knowledge might have a sense of confidence about rejecting the provided anchor and adjusting far away from it. Furthermore, the mapping knowledge might provide a specific rationale for extended adjustment. Namely, when the person’s mapping knowledge indicates that an estimate should be relatively small, the person would be inclined to make large adjustments from a high anchor (conducive to smaller anchoring effects). For example, if this person has the mapping knowledge that Estonia is smaller than most other European countries, this person might make a larger adjustment from an obviously high anchor than if he or she had no knowledge of Estonia’s relative standing.

Previous anchoring findings vis-à-vis the metric/mapping distinction

Previous investigations into the relationship between knowledge and anchoring effects have not made a distinction between these two types of knowledge; researchers have often measured knowledge by asking a single “knowledge” question (e.g., Critcher & Gilovich, 2007; Wilson et al., 1996). It is likely that some people rated themselves as having high knowledge because they were knowledgeable about metric information, whereas others rated themselves as having high knowledge because they were knowledgeable about mapping information. Studies that compare experts to nonexperts have assumed that experts are more knowledgeable than nonexperts but have never assessed whether the experts had higher metric or mapping knowledge than the nonexperts (e.g., Englich et al., 2006).

Among the small set of anchoring studies that have manipulated knowledge (e.g., Englich, 2008; Smith et al., 2013), two have used manipulations that could be viewed as being specific to metric knowledge. First, Mussweiler and Strack (2000) demonstrated that participants who were led to believe that “Xiang Long” was a person exhibited smaller anchoring effects than did participants who did not know what category (e.g., person, cultural possession, or location) “Xiang Long” belonged to. Second, Simmons et al. (2010) manipulated whether or not participants knew whether an anchor was too high or too low. Participants who knew the correct direction to adjust exhibited smaller anchoring effects than did those who were not provided this information. In both of these studies, it could be argued that the participants who exhibited smaller anchoring effects were those who had better metric knowledge. In the study by Mussweiler and Strack (2000), those who knew that “Xiang Long” was a person had some metric knowledge, whereas those who did not know the category of “Xiang Long” had no metric knowledge about the target. Similarly, the participants in the study by Simmons et al. (2010) who were told whether the anchor was too high or too low had better metric knowledge than those who were not given this information. Both of these studies provided support for our contention that increased metric knowledge leads to decreased anchoring effects. However, we assume that metric knowledge will provide more of a benefit than simply knowing which way to adjust from the anchor, although this alone can decrease anchoring effects. Furthermore, neither of these studies measured or manipulated mapping knowledge. Therefore, it remains unclear whether mapping knowledge, perhaps in combination with metric knowledge, can help to mitigate anchoring effects.

Although it is possible that mapping knowledge can help decrease the impact of anchors, there is empirical evidence to the contrary. Ariely, Loewenstein, and Prelec (2003) had business school students indicate the maximum amount of money they would pay for numerous items, with the knowledge that they might have to pay that amount for the items. Before providing their willingness to pay (WTP) for each item, the students indicated whether they would pay a dollar amount equal to the last two digits of their social security number. Consistent with other anchoring research, the students’ WTP assimilated toward the last two digits of their social security numbers (but see Fudenberg, Levine, & Maniadis, 2012, for similar studies that failed to produce anchoring effects). The students’ estimates, though biased, were also ordered sensibly. For example, students generally reported being willing to pay more for a keyboard than for a mouse, regardless of their social security number (a similar pattern was found for rare vs. average bottles of wine). Presumably, the students had a good idea that a keyboard costs more than a mouse (i.e., they had good mapping knowledge), but they still showed robust anchoring effects because they did not know how much computer accessories tend to cost (i.e., they had poor metric knowledge). The research by Ariely et al. suggests that having good mapping knowledge does not mitigate anchoring effects. However, their study did not explicitly test this idea, nor did it investigate whether increased metric knowledge might help people overcome the biasing influence of anchors.

Present studies

We conducted two studies on the influences of metric and mapping knowledge on susceptibility to anchoring effects. In Study 1, the two types of knowledge were manipulated together in a learning phase, but our use of old and new test items in a subsequent anchoring task allowed for conclusions about the importance of metric versus mapping knowledge for mitigating anchoring effects. In Study 2, we manipulated the two types of knowledge independently. We expected that metric knowledge, rather than mapping knowledge, would prove critical for mitigating anchoring effects in both studies.

Study 1

Because our studies involved a manipulation of knowledge, we chose a domain for which our participants started with relatively limited knowledge—the populations of African countries. The procedures of Study 1 were borrowed from research on seeding the knowledge base (e.g., Brown & Siegler, 1993, 1996, 2001; Friedman & Brown, 2000; LaVoie et al., 2002). The study consisted of a learning phase and a testing phase. During the learning phase, participants in what we will call the full-knowledge condition acquired metric and mapping knowledge about the populations of African countries. Participants in a no-knowledge condition acquired neither metric nor mapping knowledge; they instead learned the capital cities of African countries.

During the test phase, participants made estimates about the populations of African countries after considering high or low anchors. Critically, participants made estimates about countries from the learning phase (“old countries”) and countries not from the learning phase (“new countries”). A somewhat obvious prediction is that, for estimates regarding the populations of “old” countries, the full-knowledge participants would exhibit smaller anchoring effects than the no-knowledge participants. More important was the prediction that the same pattern would hold for “new” countries. That is, learning the population of some countries would make participants less susceptible to anchoring effects when making estimates about other countries that they had not seen before.

Our prediction was based on the assumption that viewing populations in the learning phase provides a base of metric knowledge relevant to other African countries. Therefore, the metric knowledge could be helpful in making good estimates—less biased by anchors—for even the new countries that were not in the learning phase. In addition to assessing anchoring effects, we also assessed two types of general accuracy in order to separately measure metric knowledge and mapping knowledge. This allowed us to make attributions about whether reductions in the anchoring effects for new countries (among full-knowledge participants) were due to the gain of metric knowledge, mapping knowledge, or both.

Method

Participants and design

Fifty-two students in an introductory psychology course participated as partial fulfillment of a research requirement. The study was based on a 2 (Knowledge Condition: full vs. no knowledge) × 2 (Anchor: high vs. low) × 2 (Country List: old vs. new) mixed design. Anchor and Country List were within-subjects factors.

Materials

In the test phase, participants made estimates for one of two lists of 12 countries (Lists A and B). These lists were created such that the means and distributions of the country populations were roughly equal to one another.

Procedure

The participants were told that the experiment had two phases. In the first phase, they reviewed information about numerous countries and were informed that this information would be useful in the second phase of the study.

During the learning phase, the participants in both the full- and no-knowledge conditions learned information about 12 countries. Approximately half of the participants saw List A during the learning phase, and the other half saw List B. This counterbalanced factor (i.e., seeing List A or List B in the learning phase) did not significantly influence the results. Participants in the full-knowledge condition were shown the list of 12 countries and their populations. The list was displayed in descending order of population, to emphasize how the countries compared to one another. Participants were, therefore, able to acquire both metric and mapping knowledge from the list. The participants in the no-knowledge condition were shown the list of 12 countries and their capital cities (these pairs were displayed in a random order). The participants in both conditions had 2 min to study the information.

Immediately after the learning phase, all participants indicated how knowledgeable they were about country populations and capital cities using 7-point scales (1 = not at all knowledgeable, 7 = extremely knowledgeable). The participants were asked to take what they had learned during the study into account when answering the knowledge questions.

In the testing phase of the study, the participants answered 12 anchoring questions about the populations of 12 countries. They made estimates about six countries following a high anchor and six countries following a low anchor. The participants read that they would answer questions about the populations of African countries—some of which they had previously seen and some of which were new—and that they would compare the populations to a “randomly determined” and “completely arbitrary” value. The anchor values were described as random and arbitrary in order to reduce the possibility that the anchors would be viewed as informative (Schwarz, 1994).

For each anchoring question, the participants were first asked whether the population was more or less than the anchor (e.g., “Is the population of Somalia more or less than 2 million people?”). The anchor values used were 2 million (low anchor) and 150 million (high anchor). Next, the participants estimated the population of the country (e.g., “What is the population of Somalia?”). The order in which the anchors were displayed (i.e., six high and then six low, or six low and then six high) was counterbalanced across participants. Half of the countries that were asked about in this phase were countries that the participants had seen in the learning phase (e.g., countries from List A if that was the list that participants had seen during the learning phase), and the other half were new countries (e.g., countries from List B if they had seen List A during the learning phase). The order of presentation of new and old countries was randomized for each participant. In total, the participants saw three old and three new countries with high anchors, and three old and three new with low anchors.

Results

Anchoring effects

One participant was dropped from the analyses because his or her estimates indicated that he or she was not attempting to give accurate answers. The primary analysis concerned how the participants’ estimates were influenced by comparisons with the anchors (see Appendix A for the median estimates of each country). To investigate the anchoring effects, we calculated the signed order of magnitude error (SOME) for each estimate (see Footnote 1). The SOME is defined as

$$ \mathrm{SOME} = \log_{10}\left(\mathrm{Estimated\ Value} / \mathrm{Actual\ Value}\right). $$

The SOME provides a measure of error that is presented in terms of the order of magnitude of the error (Nickerson, 1980). For example, if the actual value is 10, estimates of 1, 5, 10, 20, and 100 would result in SOME values of –1.0, –0.3, 0, 0.3, and 1.0, respectively. A negative SOME value indicates underestimation, whereas a positive SOME value indicates overestimation. The SOME measure is useful because it minimizes the effect of outliers—a common problem when studying domains that participants are unfamiliar with (Brown, 2002). With regard to anchoring effects, we expected that the SOME values from estimates following low anchors would be smaller than the SOME values from estimates following high anchors.
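
To make the measure concrete, the following minimal Python sketch (our illustration, not part of the original article) computes SOME for the worked example above, in which the actual value is 10.

import numpy as np

def signed_order_of_magnitude_error(estimates, actual):
    # SOME = log10(estimate / actual); negative values indicate
    # underestimation, positive values indicate overestimation.
    return np.log10(np.asarray(estimates, dtype=float) / actual)

# Worked example from the text: actual value of 10
print(np.round(signed_order_of_magnitude_error([1, 5, 10, 20, 100], 10), 1))
# -> approximately [-1.0, -0.3, 0.0, 0.3, 1.0]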

From the individual SOME values, we computed four SOME averages for each participant—one for the three responses to the old countries following a low anchor, one for the three old countries following a high anchor, one for the three new countries following a low anchor, and one for the three new countries following a high anchor. These average SOME values were then analyzed in a 2 (Knowledge Condition) × 2 (Country List: old or new countries) × 2 (Anchor) analysis of variance (ANOVA). We found a significant anchoring main effect, F(1, 49) = 160.64, p < .001, ηp² = .76; participants gave higher estimates following a high anchor than following a low anchor. We also found an unexpected but relatively unimportant main effect of country list, F(1, 49) = 18.11, p < .001, ηp² = .27, which did not interact with the knowledge condition, F(1, 49) = 1.34, p = .25, ηp² = .03 (see Footnote 2). The critical finding was a predicted Knowledge Condition × Anchor interaction, F(1, 49) = 55.11, p < .001, ηp² = .53. As is shown in Fig. 1, participants in the full-knowledge condition were less influenced by the anchors than were participants in the no-knowledge condition. There was no three-way interaction, F(1, 49) = 1.35, p = .25, ηp² = .09, indicating that the reduction in bias was not limited to country populations that were studied in the learning phase. We further tested this by conducting separate 2 (Knowledge) × 2 (Anchor) ANOVAs on the SOME values for the old- and new-country estimates. The Knowledge × Anchor interaction was significant for both the old countries, F(1, 50) = 36.82, p < .001, ηp² = .42, and the new countries, F(1, 50) = 28.51, p < .001, ηp² = .36. In other words, as expected, full-knowledge participants showed smaller anchoring effects for both the old and new countries. It would appear that the full-knowledge participants were able to generalize their knowledge about the old countries to the new countries, which allowed them to limit the biasing influence of the anchors.
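
For readers who wish to reproduce this style of analysis, the sketch below (Python, with synthetic data and hypothetical column names; the effect sizes are arbitrary assumptions, not values from the study) computes each participant’s anchoring effect as the mean SOME after high anchors minus the mean SOME after low anchors and compares those difference scores across knowledge conditions. In a 2 × 2 mixed design this comparison probes the same Knowledge Condition × Anchor interaction reported above, although the full analysis here was a 2 × 2 × 2 mixed ANOVA.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic long-format data (one row per estimate); the column names and the
# effect sizes below are illustrative assumptions, not values from the study.
rows = []
for pid in range(52):
    knowledge = 'full' if pid < 26 else 'no'
    pull = 0.2 if knowledge == 'full' else 0.6      # assumed anchor pull
    for anchor, sign in (('high', 1), ('low', -1)):
        for _ in range(6):
            rows.append({'participant': pid, 'knowledge': knowledge,
                         'anchor': anchor,
                         'some': sign * pull + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Per-participant anchoring effect: mean SOME after high anchors minus mean
# SOME after low anchors (larger values = larger anchoring effect).
cells = df.pivot_table(index=['participant', 'knowledge'],
                       columns='anchor', values='some', aggfunc='mean')
effects = (cells['high'] - cells['low']).rename('anchoring').reset_index()

# Comparing these difference scores across the between-subjects knowledge
# factor tests the Knowledge x Anchor interaction of a 2 x 2 mixed design.
full = effects.loc[effects['knowledge'] == 'full', 'anchoring']
no_k = effects.loc[effects['knowledge'] == 'no', 'anchoring']
print(stats.ttest_ind(full, no_k))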

Fig. 1 Signed orders of magnitude of errors for participants’ population estimates following comparisons with high and low anchors in Study 1. The differences between the high- and low-anchor estimates represent the magnitude of anchoring effects. Error bars represent ±1 SE

A possible explanation for the reduced anchoring effects in the full-knowledge condition is that the knowledge manipulation simply informed the participants whether the anchor values were too high or too low. To investigate whether this accounted for the reduction in anchoring effects, we conducted a follow-up analysis only on those estimates that were lower than the high anchor or higher than the low anchor (depending on which anchor the participant saw). A 2 (Knowledge) × 2 (Country List) × 2 (Anchor) ANOVA on the remaining 81.86 % of the estimates again revealed a main effect of anchor, F(1, 45) = 82.62, p < .001, ηp² = .65, the predicted Knowledge × Anchor interaction, F(1, 45) = 31.55, p < .001, ηp² = .41, and no three-way interaction (F < 1). Separate 2 (Knowledge) × 2 (Anchor) ANOVAs on the old- and new-country estimates revealed the predicted Knowledge × Anchor interaction for both the old, F(1, 49) = 26.07, p < .001, ηp² = .35, and the new, F(1, 45) = 18.86, p < .001, ηp² = .30, countries. These analyses revealed that even when focusing only on those estimates for which participants adjusted in the correct direction, the full-knowledge participants were less biased by the anchors than were the no-knowledge participants. Therefore, the impact of the knowledge gained by the full-knowledge participants extended beyond simply knowing which direction to adjust from the anchor.

Measures of metric and mapping knowledge

Participants in the full-knowledge condition exhibited smaller anchoring effects than did the no-knowledge participants, but an important question is what type of knowledge they were endowed with. To answer this question, we evaluated two distinct measures of accuracy, one that primarily gauges metric knowledge and one that primarily gauges mapping knowledge.

Metric knowledge

To investigate participants’ metric knowledge, we computed the order of magnitude error (OME) for each estimate, such that

$$ \mathrm{OME} = \left|\log_{10}\left(\mathrm{Estimated\ Value} / \mathrm{Actual\ Value}\right)\right|. $$

The OME expresses error as the number of orders of magnitude by which an estimate deviates from the actual value (Nickerson, 1980). Small values represent less error (greater accuracy), and large values represent more error (less accuracy). For example, if the actual value is 10, estimates of 1, 5, 10, 20, and 100 would result in OME values of 1.0, 0.3, 0, 0.3, and 1.0, respectively. Because the OME is the absolute value of the error, it does not indicate whether the error reflects over- or underestimation. For this study, the OME represents how much a participant’s estimate of a country’s population deviated from the correct value. The OME is generally considered a measure of participants’ metric knowledge (Brown, 2002).
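
As in the SOME sketch above, a minimal Python illustration of the OME (our code, not the authors’) is simply the absolute value of the same log-scaled ratio.

import numpy as np

def order_of_magnitude_error(estimates, actual):
    # OME = |log10(estimate / actual)|; larger values mean less accurate
    # estimates, with no indication of over- versus underestimation.
    return np.abs(np.log10(np.asarray(estimates, dtype=float) / actual))

# Worked example from the text: actual value of 10
print(np.round(order_of_magnitude_error([1, 5, 10, 20, 100], 10), 1))
# -> approximately [1.0, 0.3, 0.0, 0.3, 1.0]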

After calculating the OME for each estimate, the six OME values for estimates made about the old countries were averaged together, as were the six OME values for the new-country estimates. This left each participant with two measures of metric knowledge, one for the old countries and one for the new countries. A 2 (Knowledge) × 2 (Country List: old vs. new) ANOVA on participants’ average OME values revealed two main effects and an interaction (see Fig. 2). The main effect of knowledge, F(1, 49) = 83.56, p < .001, ηp² = .63, indicated that participants in the full-knowledge condition provided more accurate responses than did participants in the no-knowledge condition. There was also a main effect of country list, F(1, 49) = 24.23, p < .001, ηp² = .33. These two main effects were qualified by a significant interaction, F(1, 49) = 21.49, p < .001, ηp² = .31. As is shown in Fig. 2, the difference between the full-knowledge and no-knowledge conditions was larger for estimates about countries that had appeared in the learning phase (i.e., the old countries) than for estimates about countries the participants had not seen before (i.e., the new countries). Simple-effect tests revealed that the participants in the full-knowledge condition provided more accurate responses than did the no-knowledge participants for both the old countries, F(1, 49) = 106.31, p < .001, ηp² = .69, and the new countries, F(1, 49) = 26.55, p < .001, ηp² = .35.

Fig. 2 Orders of magnitude of errors for participants’ country population estimates in Study 1. Higher values indicate greater error (i.e., less accurate estimates). Error bars represent ±1 SE

In short, it appears that metric knowledge—as assessed by the OME; see Brown (2002)—was enhanced by the knowledge manipulation and that the metric knowledge gained about the old countries was useful when making estimates about the new countries. Given that the anchoring effects followed a similar pattern (reduced in the full-knowledge condition, even for new countries), this supports the idea that metric knowledge was important for the reduction in anchoring.

Mapping knowledge

Whereas OME/mean-level accuracy is a measure of metric knowledge, correlational accuracy is a measure of a person’s mapping knowledge (Brown, 2002; Brown & Siegler, 1993). To evaluate correlational accuracy, we calculated within-subjects rank-order correlations between the participants’ estimates and the actual country populations, separately for the old and new countries. We then applied the Fisher transformation to each correlation coefficient. A 2 (Knowledge) × 2 (Country List) ANOVA on participants’ transformed correlations revealed two main effects and an interaction (for ease of interpretation, Fig. 3 presents the Spearman correlation coefficients rather than the transformed values). Participants in the full-knowledge condition showed better correlational accuracy than did participants in the no-knowledge condition, F(1, 49) = 11.69, p = .001, ηp² = .19, and estimates made about the old countries were more accurate than those made about the new countries, F(1, 49) = 16.54, p < .001, ηp² = .25. We also found a significant interaction, indicating that the difference between the knowledge conditions varied depending on whether the participants were estimating the populations of old or new countries, F(1, 49) = 9.72, p = .003, ηp² = .17. A simple-effect test revealed that, for estimates about old countries, full-knowledge participants showed better correlational accuracy than did no-knowledge participants, F(1, 49) = 12.61, p = .001, ηp² = .21. Critically, however, the same was not true for the new countries, F(1, 49) = 1.51, p = .23, ηp² = .03. That is, the mapping knowledge gained in the learning phase by the full-knowledge participants did not increase their correlational accuracy when they encountered new countries. This finding is consistent with previous research showing that metric knowledge generalizes from old to new items, but mapping knowledge does not (e.g., Brown & Siegler, 1993, 1996; LaVoie et al., 2002). The important point for the present purposes is that because mapping knowledge—unlike metric knowledge—did not generalize to new countries, it seems unlikely that mapping knowledge played a role in mitigating the anchoring effects for the new countries.
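
A minimal Python sketch of the correlational-accuracy measure for a single participant, using made-up estimates (the numbers below are purely illustrative):

import numpy as np
from scipy import stats

# One hypothetical participant's estimates for six countries versus the actual
# populations (values in millions; illustrative numbers only).
actual    = np.array([2.0, 4.0, 10.0, 17.0, 25.0, 90.0])
estimates = np.array([1.0, 6.0, 12.0, 30.0, 20.0, 60.0])

rho, _ = stats.spearmanr(estimates, actual)  # rank-order (correlational) accuracy
z = np.arctanh(rho)                          # Fisher r-to-z transformation

print(f"Spearman rho = {rho:.2f}, Fisher z = {z:.2f}")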

Fig. 3 Rank-order correlations between participants’ estimates and the actual population values in Study 1. Higher values represent greater accuracy. Error bars represent ±1 SE

Measures of knowledge and anchoring effects

The analyses above revealed how OME, correlational accuracy, and anchoring effects differed across the two knowledge conditions. We also examined the relationship between participants’ anchoring effects and the two measures of knowledge. For each participant, we first calculated the difference between their average SOME values following high and low anchors, separately for the old and new countries. These values served as a measure of participants’ anchoring effects, with higher values indicating larger anchoring effects. Next, we conducted two regression analyses—one for the old countries and one for the new countries—predicting participants’ anchoring effects from their OME and correlational accuracy measures for those countries. For the old countries, participants’ OMEs significantly predicted their anchoring effects, β = .800, t(48) = 7.87, p < .001. Participants’ correlational accuracy, however, was not a significant predictor of their anchoring effects, β = –.057, t(48) = –0.56, p = .58. Similarly, for the new countries, participants’ OMEs predicted their anchoring effects, β = .381, t(48) = 2.67, p = .01, but correlational accuracy did not, β = –.08, t(48) = –0.56, p = .58.
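
The sketch below shows the general form of such a regression in Python with statsmodels, using synthetic per-participant scores; the coefficient used to generate the synthetic outcome is an arbitrary assumption, so the output will not reproduce the values reported above.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 51

# Synthetic per-participant summary scores (standardized for illustration).
ome = rng.normal(size=n)                     # metric (in)accuracy
corr_acc = rng.normal(size=n)                # mapping accuracy (Fisher z)
anchoring = 0.8 * ome + rng.normal(scale=0.5, size=n)   # assumed relationship

# Predict anchoring effects from OME and correlational accuracy.
X = sm.add_constant(pd.DataFrame({'ome': ome, 'corr_acc': corr_acc}))
fit = sm.OLS(anchoring, X).fit()
print(fit.summary())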

Subjective knowledge judgments

As one would expect, participants in the full-knowledge condition (M = 3.00, SD = 1.34) reported higher levels of knowledge about country populations than did participants in the no-knowledge condition (M = 1.55, SD = 0.67), t(49) = 4.67, p < .001, d = 1.38. Participants in the full-knowledge condition (M = 1.97, SD = 1.45) reported lower levels of knowledge about capital cities than did participants in the no-knowledge condition (M = 3.18, SD = 1.40), t(49) = 3.01, p = .004, d = 0.85. An examination of the relationship between participants’ subjective knowledge judgments and their anchoring effects revealed significant negative correlations for both the old, r(49) = –.37, p = .007, and the new, r(49) = –.36, p = .01, countries.

Discussion

Study 1 clearly demonstrated that anchoring effects are moderated by knowledge level. Participants who learned a list of country populations showed smaller anchoring effects than did participants who did not learn the populations. Importantly, the full-knowledge participants demonstrated decreased anchoring effects for both countries they had previously been exposed to and countries they had not seen. In fact, the sizes of the anchoring effects were roughly the same for new and old countries. It appears that participants generalized some of the information they learned to new countries, and this knowledge helped to combat the biasing influence of anchors. Also, analyses focusing only on those estimates that were above the low anchor and below the high anchor still revealed decreased anchoring effects for the full-knowledge participants. It appears that the benefits of knowledge extended beyond simply knowing which direction to adjust from the anchor values.

What information generalized and helped combat anchoring effects? The results from the analyses of OME and correlational accuracy are crucial for this question: Only accuracy as measured by OME, and not as measured by a correlation, showed improved performance on new items (when comparing full- to no-knowledge participants). This suggests that metric knowledge, but not mapping knowledge, was what generalized and helped combat anchoring effects.

Study 2

Study 1 demonstrated that knowledge level moderates anchoring effects and provided initial evidence that this relationship depends on the type of knowledge that one has. However, participants in the full-knowledge condition were given both metric and mapping knowledge, so Study 1 was not a direct test of whether increasing metric knowledge—independent of mapping knowledge—would reduce anchoring effects. Study 2 provided this direct test. Study 2 was similar to Study 1 in topic area and methodology, but the primary difference was that, in addition to the full- and no-knowledge conditions used in Study 1, we created two more knowledge conditions. Participants in a distribution condition learned information about the distribution of the populations of African countries (providing them with metric knowledge), whereas participants in a rank-order condition received information about how the countries compare with one another (providing them with mapping knowledge). The critical question in Study 2 was whether these new knowledge conditions would show reduced anchoring effects. We expected that the condition that provided metric information (the distribution condition) would show smaller anchoring effects than would the condition that provided mapping information (the rank-order condition).

Method

Participants and design

A total of 106 students in an introductory psychology course participated as partial fulfillment of a research requirement. This study was based on a 4 (Knowledge Condition: full, distribution, rank-order, or no knowledge; see Footnote 3) × 2 (Anchor: high vs. low) between-subjects design.

Materials and procedures

Overall, the materials and procedures were similar to those used in Study 1, with the differences noted below.

During the learning phase, the participants were shown a list of the names of 16 African countries, along with additional information that varied as a function of the knowledge condition. The full- and no-knowledge conditions were the same as in Study 1 (i.e., participants in the full-knowledge condition saw the country names and populations, whereas those in the no-knowledge condition saw the names and capital cities). The participants in the distribution condition were shown two lists, one with the country names and the other with the country populations. The country names were displayed in a random order, so the participants did not know which population value went with which country. The participants were, however, able to discern the range and distribution of African country populations. This provided them with metric information but not mapping information. Participants in the rank-order condition were shown the list of 16 countries ordered from most to least populated, but with no population values provided. This provided them with mapping information (i.e., how the countries compare with one another), but not metric information.

After the learning phase, participants provided subjective judgments of their knowledge about African countries. In addition to the question about their general knowledge level that was used in Study 1, the participants were asked questions designed to assess their mapping and metric knowledge. Specifically, they were asked how knowledgeable they were about “how African countries compare to one another in terms of their populations (for example, knowing which countries are relatively large and which are relatively small)?” and “the specific population values that African countries tend to be?” The participants also indicated their knowledge level of the capital cities of African countries.

During the testing phase, the participants made population estimates about six countries. The six countries were “old,” in the sense that they were from the set of 16 on the study list, but it is important to point out that only the full-knowledge participants had learned the specific populations of those countries. Depending on the anchor condition, a participant made his or her six population estimates after seeing either a low or a high anchor. In a change from Study 1, the high anchor was 70 million and the low anchor was 8 million. In Study 1, the high (150 million) and low (2 million) anchors were outside the range of populations presented to the full-knowledge participants during the learning phase. Because of this, it is possible that the only reason the full-knowledge participants were less influenced by the anchors was that they were able to reject the anchors as clearly too high or too low. Our use of less extreme anchors (70/8 million) in Study 2 provided a more conservative test of how knowledge level moderates anchoring effects.

Results and discussion

Anchoring effects

To examine the influence of the different types of knowledge on anchoring effects, we again computed participants’ SOMEs for each estimate and then averaged these values (see Appendix B for the median estimates of each country). A 4 (Knowledge Condition) × 2 (Anchor) ANOVA on participants’ SOME values revealed a significant anchoring effect, F(1, 95) = 49.45, p < .001, ηp² = .34—participants gave higher estimates after high anchors. This main effect was qualified by a significant Knowledge Condition × Anchor interaction, F(3, 95) = 4.27, p = .007, ηp² = .12 (see Fig. 4 for an illustration of the pattern). To test the prediction that the conditions designed to improve metric knowledge (the full-knowledge and distribution conditions) would show smaller anchoring effects than those not designed to improve metric knowledge (the rank-order and no-knowledge conditions), we conducted a series of interaction contrasts (Abelson & Prentice, 1997). Participants in the full-knowledge and distribution conditions were similarly influenced by the anchors, F(1, 98) = 0.29, p = .59, ηp² = .003. Participants in the full-knowledge, F(1, 98) = 4.53, p = .04, ηp² = .04, and distribution, F(1, 98) = 6.82, p = .01, ηp² = .07, conditions were less influenced by anchors than were participants in the rank-order condition. And finally, participants in the rank-order and no-knowledge conditions were similarly influenced by anchors, F(1, 98) = 0.98, p = .32, ηp² = .01. In short, the pattern of results supports our primary expectation that the anchoring effects would be smaller in the conditions designed to enhance metric knowledge (the full-knowledge and distribution conditions) than in the conditions designed to enhance mapping knowledge or irrelevant knowledge (the rank-order and no-knowledge conditions).
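
For illustration, a 4 × 2 between-subjects ANOVA of this general form can be run with the statsmodels formula interface, as in the Python sketch below; the data are synthetic and the assumed condition effects are ours, so the output will not match the reported statistics.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
conditions = ['full', 'distribution', 'rank_order', 'no_knowledge']
pull = {'full': 0.2, 'distribution': 0.2, 'rank_order': 0.6, 'no_knowledge': 0.6}

# Synthetic between-subjects data: 13 participants per Knowledge x Anchor cell.
rows = []
for knowledge in conditions:
    for anchor, sign in (('high', 1), ('low', -1)):
        for _ in range(13):
            rows.append({'knowledge': knowledge, 'anchor': anchor,
                         'some': sign * pull[knowledge] + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Main effects and the Knowledge Condition x Anchor interaction.
model = smf.ols('some ~ C(knowledge) * C(anchor)', data=df).fit()
print(anova_lm(model, typ=2))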

Fig. 4 Signed orders of magnitude of errors for participants’ population estimates following comparisons with high and low anchors in Study 2. The differences between the high- and low-anchor estimates represent the magnitude of anchoring effects. Error bars represent ±1 SE

As in Study 1, we conducted follow-up analyses on only those estimates that were lower than the high anchor or higher than the low anchor (depending on which anchor the participants were shown). A 4 (Knowledge Condition) × 2 (Anchor) ANOVA on the remaining 72.33 % of estimates again revealed a main effect of anchor, F(1, 96) = 16.46, p < .001, ηp² = .15, and the predicted Knowledge Condition × Anchor interaction, F(3, 96) = 4.75, p = .004, ηp² = .13. Even when focusing only on those estimates that participants adjusted in the correct direction, the full-knowledge and distribution conditions were less influenced by anchors than were the rank-order and no-knowledge conditions. It would appear that the knowledge gained by the full-knowledge and distribution participants extended beyond simply knowing which direction to adjust from the anchor.

Measures of metric and mapping knowledge

In order to know what the participants learned that allowed the full-knowledge and distribution conditions to give less biased estimates than the rank-order and no-knowledge conditions, we examined measures of metric and mapping knowledge.

Metric knowledge

Mean-level accuracy was again evaluated by computing an OME value for each of the participants’ estimates. As a reminder, the OME represents the amount of error in participants’ estimates, such that higher values indicate less accurate responses. A one-way ANOVA on participants’ average OME values revealed that they varied as a function of the knowledge condition, F(3, 105) = 17.23, p < .001, ηp² = .34 (see Fig. 5). Follow-up contrast tests revealed that participants in the full-knowledge condition had smaller OME values than did participants in the distribution condition, t(102) = 2.13, p = .036. Those in the distribution condition in turn had smaller OME values than did participants in the rank-order and no-knowledge conditions (ts > 3.60, ps < .001). Participants’ OME values in the rank-order and no-knowledge conditions did not differ from one another, t(102) = 0.17, p = .87. This reveals that participants’ metric knowledge (as measured by the OME) was enhanced in the full-knowledge and distribution conditions.

Fig. 5 Orders of magnitude of errors for participants’ country population estimates in Study 2. Higher values indicate greater error (i.e., less accurate estimates). Error bars represent ±1 SE

Mapping knowledge

To gauge mapping knowledge, we calculated the rank-order correlation between participants’ population estimates for the six countries and the actual populations of those countries. Next, the Fisher transformation was applied to each participant’s correlation coefficient (for ease of interpretation, Fig. 6 presents the Spearman correlations). A one-way ANOVA revealed that correlational accuracy varied as a function of the knowledge condition, F(3, 102) = 5.10, p = .003, ηp² = .13. Follow-up contrast tests revealed that participants in the rank-order and full-knowledge conditions did not differ in their correlational accuracy, t(102) = 1.28, p = .20. Participants in the rank-order and full-knowledge conditions exhibited greater correlational accuracy than did those in the other two conditions (ps < .08). The correlational accuracies of participants in the distribution and no-knowledge conditions also did not differ (p = .44). These analyses reveal that participants’ mapping knowledge, as measured by correlational accuracy, was enhanced in the rank-order and full-knowledge conditions. Mapping knowledge was unaffected—relative to the no-knowledge condition—in the distribution condition.

Fig. 6 Rank-order correlations between participants’ estimates and the actual population values in Study 2. Higher values represent greater accuracy. Error bars represent ±1 SE

Taken together, these two measures reveal the expected pattern. Participants’ mapping and metric knowledge were increased in the full-knowledge condition; participants’ metric knowledge (but not their mapping knowledge) was increased in the distribution condition; and participants’ mapping knowledge (but not their metric knowledge) was increased in the rank-order condition.

Measures of knowledge and anchoring effects

To address how the measures of accuracy were related to anchoring effects, we conducted a regression analysis predicting participants’ SOME averages from their anchor condition (low or high), OME, correlational accuracy, and the two two-way interaction terms. This analysis revealed main effects of participants’ anchor condition, β = .52, t(100) = 8.00, p < .001, and OME, β = –.29, t(100) = 3.88, p < .001. The two interaction terms also significantly predicted participants’ SOME values. First, the interaction between anchor condition and participants’ OME was significant, β = .34, t(100) = 4.60, p < .001: Participants who had lower OME values were less influenced by anchors. Second, we found an interaction between anchor condition and participants’ correlational accuracy, β = .17, t(100) = 2.38, p = .02: As correlational accuracy increased, participants’ estimates were more influenced by anchors, rather than less influenced. The direction of this relationship, which may seem surprising, likely reflects the different impacts of the information studied in the distribution and rank-order conditions. The information in the distribution condition kept anchoring effects small but left people without much correlational accuracy. The information in the rank-order condition allowed for large anchoring effects along with good correlational accuracy. Taken together, the two interactions reported above clearly indicate that greater metric knowledge—and not greater mapping knowledge—is associated with smaller anchoring effects.
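
A sketch of this kind of moderated regression, again in Python with synthetic data and assumed effect sizes (so the coefficients will not match those reported above), using the statsmodels formula interface:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 106

# Synthetic per-participant data: anchor coded low = -1, high = +1; knowledge
# measures standardized. The generating coefficients are arbitrary assumptions.
df = pd.DataFrame({
    'anchor': rng.choice([-1.0, 1.0], size=n),
    'ome': rng.normal(size=n),
    'corr_acc': rng.normal(size=n),
})
df['some'] = (0.5 * df['anchor'] + 0.3 * df['anchor'] * df['ome']
              + rng.normal(scale=0.4, size=n))

# Main effects of anchor, OME, and correlational accuracy, plus the two
# Anchor x knowledge-measure interactions described in the text.
fit = smf.ols('some ~ anchor * ome + anchor * corr_acc', data=df).fit()
print(fit.params)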

Subjective knowledge judgments

Recall that the participants made self-assessments of their general knowledge, mapping knowledge, metric knowledge, and capital-city knowledge. Responses to the first three questions were correlated (rs ranged from .37 to .59, ps < .001). Separate one-way ANOVAs for each question revealed sensible patterns (see Fig. 7). Here we briefly report the key comparisons. For the general-knowledge question, ANOVA contrasts revealed higher self-assessments in the full-knowledge condition than in the distribution and no-knowledge conditions (ps < .05), and marginally higher self-assessments in the rank-order condition than in the no-knowledge condition (p = .06). For the mapping question, self-assessments were higher in the full-knowledge condition than in the rank-order condition (p = .02), and higher in the rank-order condition than in the distribution and no-knowledge conditions (ps < .01). For the metric question, self-assessments in the full-knowledge condition were the highest (ps < .05); the assessments in the other three conditions did not differ significantly from one another (ps > .15). For the capital-cities question, self-assessments were highest in the no-knowledge condition (p < .001)—likely because participants in the no-knowledge condition were the only ones to study capital cities during the learning phase.

Fig. 7 Means of participants’ subjective knowledge judgments, split by knowledge condition and knowledge judgment, in Study 2. Responses were made on a 1 (not at all knowledgeable) to 7 (extremely knowledgeable) scale. Error bars represent ±1 SE

Finally, we examined whether participants’ subjective knowledge ratings were related to their anchoring effects. We conducted a regression analysis using participants’ anchor condition (high vs. low), their subjective knowledge judgments (general, mapping, and metric), and the three interaction terms to predict their average SOME values across their six estimates. Again, anchor condition predicted the participants’ estimates, β = .57, t(98) = 6.95, p < .001. More importantly, the only other significant effect was the Metric × Anchor Condition interaction, β = –.19, t(98) = 2.05, p = .04: Participants who reported high levels of metric knowledge were less influenced by anchors than were participants who reported low levels of metric knowledge. The Mapping × Anchor Condition (p = .88) and General × Anchor Condition (p = .45) interactions did not approach significance. These results are consistent with the larger pattern of results from the objective knowledge measures and suggest that self-assessments targeted toward metric knowledge, rather than mapping or general knowledge, will be most successful in predicting susceptibility to anchoring effects.

General discussion

These studies reveal that the particular type of knowledge that people have is an important determinant of their susceptibility to anchoring effects. In Study 1, participants who studied a list of country populations—that is, the full-knowledge participants—were less influenced by anchors than were participants who learned irrelevant information. Importantly, the full-knowledge participants showed smaller anchoring effects when making estimates about countries they had previously studied and about new countries they had not previously seen. With the new countries, the full-knowledge participants demonstrated increased metric knowledge, but not increased mapping knowledge—implicating metric knowledge as an important determinant of susceptibility to anchoring effects.

Study 2 more directly tested the importance of metric knowledge in reducing anchoring effects. Participants learned information specifically designed to influence their metric or mapping knowledge (or both). As predicted, participants in conditions that increased metric knowledge exhibited reduced anchoring effects. Those in the condition that only increased mapping knowledge showed anchoring effects similar to those in the no-knowledge condition.

In both studies, the benefits of increased metric knowledge extended beyond simply knowing whether the anchors were too high or too low. When we restricted our analyses to estimates that were adjusted in the correct direction from the high and low anchors, higher metric knowledge was still associated with smaller anchoring effects. In addition to knowing in which direction to adjust, people with high metric knowledge had a better sense of the distribution of African country populations. Therefore, they were better able to overcome the biasing influence of anchors.

Practical implications for debiasing anchoring effects

Anchoring effects have been observed in numerous real-world settings, including legal experts’ sentencing decisions (Englich et al., 2006), doctors’ diagnoses (Brewer et al., 2007), and personal-injury damages awards (Chapman & Bornstein, 1996). Therefore, finding ways of reducing anchoring effects could have numerous practical implications. Unfortunately, anchoring effects tend to be quite resistant to debiasing manipulations such as forewarning people about their biasing influence (e.g., Epley & Gilovich, 2005; Wilson et al., 1996). Our studies, however, demonstrated that manipulations of metric knowledge can reduce participants’ anchoring effects.

A situation in which anchoring effects are potentially costly is in personal-injury damages awards. In general, the more money a plaintiff requests as compensation for their pain and suffering, the more money they are awarded by jurors (Chapman & Bornstein, 1996; Hinsz & Indahl, 1995; Malouff & Schutte, 1989; Marti & Wissler, 2000). This occurs even when controlling for the severity of the injury, resulting in high variability in awards for similar cases (Saks, Hollinger, Wissler, Evans, & Hart, 1997). Our studies suggest that a simple and effective intervention to reduce anchoring effects would be to give jurors brief descriptions of several cases, including the amount of money awarded to the plaintiff in each case (analogous to the full-knowledge conditions in our studies; see Saks et al., 1997, for a similar study). In fact, this intervention could perhaps be simplified by only giving jurors the amount of money awarded to each plaintiff, without any description of the details of the case (similar to the distribution condition in our Study 2). Presumably, this would be enough to increase the jurors’ metric knowledge about usual award amounts. They should, therefore, be less influenced by the amount of money requested by the plaintiff. Ideally, this would increase the correspondence between the severity of the injury and the amount awarded.

Final thoughts

Although it might seem reasonable to assume that more knowledgeable people should be less biased, the present studies illustrate that this is not always the case. The relationship between knowledge and anchoring effects is complex, because not all types of knowledge are equally effective at reducing the biasing influence of anchors. Knowing this, researchers can now make better predictions about the moderating role of knowledge in anchoring studies. Additionally, these findings can guide practitioners in developing debiasing techniques that may effectively reduce the biasing influence of anchors. In conclusion, increased knowledge is important, but only the right type of knowledge can reduce bias.