
Memory & Cognition, Volume 39, Issue 5, pp 851–863

The role of familiarity in binary choice inferences

  • Hidehito Honda
  • Keiga Abe
  • Toshihiko Matsuka
  • Kimihiko Yamagishi

Abstract

In research on the recognition heuristic (Goldstein & Gigerenzer, Psychological Review, 109, 75–90, 2002), knowledge of recognized objects has been categorized as “recognized” or “unrecognized” without regard to the degree of familiarity of the recognized object. In the present article, we propose a new inference model, familiarity-based inference. We hypothesize that when the subjective knowledge levels (familiarity) of recognized objects differ, the degree of familiarity of recognized objects will influence inferences. Specifically, people are predicted to infer that the more familiar object in a pair has the higher criterion value on the to-be-judged dimension. In two experiments using a binary choice task, we examined inferences about the populations of pairs of cities. The results support the predictions of familiarity-based inference: Participants inferred that the more familiar city in a pair was more populous. Statistical modeling showed that individual differences in familiarity-based inference lie in the sensitivity to differences in familiarity. In addition, we found that familiarity-based inference can generally be regarded as an ecologically rational inference. Furthermore, when cue knowledge about the inference criterion was available, participants made inferences on the basis of that cue knowledge rather than familiarity. Implications of the role of familiarity in psychological processes are discussed.

Keywords

Familiarity-based inference · Recognition heuristic · Ecological rationality · Fast and frugal heuristic

In research on judgment and decision making, many researchers have tried to clarify the various heuristics people use in deciding and judging. For example, according to the heuristics-and-biases research program, people often rely on various heuristics (e.g., availability, representativeness, and anchoring-and-adjustment) to solve problems, and this reliance can lead to biased judgments and decisions (see Gilovich, Griffin, & Kahneman, 2002; Kahneman, Slovic, & Tversky, 1982; Kahneman & Tversky, 2000). These studies, which have focused on drawbacks of the heuristics, have identified a number of conditions under which heuristics produce biases. By contrast, other research has shown an adaptive function of certain heuristics; notable among these is the recognition heuristic (Goldstein & Gigerenzer, 2002). The recognition heuristic has been proposed as one of the fast and frugal heuristics (see Gigerenzer, Todd, & The ABC Research Group, 1999). When applied to a binary choice task, the recognition heuristic is described as follows (Goldstein & Gigerenzer, 2002): “If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion” (p. 76). Imagine, for instance, the following problem: “Which city has a larger population, Tokyo or Chiba?” For this problem, the recognition heuristic predicts that someone who recognizes Tokyo but not Chiba will infer that Tokyo has the larger population.

The underlying mechanism of the recognition heuristic is very simple, since it requires a small amount of subjective information in making inferences. That is, people simply take into account whether an object is recognized or not. Previous studies have shown that people actually use the recognition heuristic in inferences, and that those inferences made on the basis of the heuristic often result in successful outcomes (e.g., Goldstein & Gigerenzer, 2002; Pachur, Bröder, & Marewski 2008; Pachur & Hertwig, 2006; Reimer & Katsikopoulos, 2004; Snook & Cullen, 2006; Volz et al., 2006).

Role of familiarity of recognized objects in inference processes

In a majority of the previous studies on the recognition heuristic, people’s knowledge about objects has been classified as either “recognized” or “unrecognized.” Thus, most research has treated subjective knowledge levels in an all-or-none fashion, without regard for differences in the degree of knowledge about the recognized objects. Yet, knowledge levels for recognized objects can vary greatly. For example, although most Japanese recognize the names of cities such as Tokyo and Amsterdam, their knowledge about these two cities may be quite different: Usually, they know much more about Tokyo than Amsterdam. In such a case, the recognition heuristic cannot be applied. In other words, the recognition heuristic cannot account for how people make inferences when both objects are recognized. However, we believe that in such cases, people use some form of fast and frugal heuristics on the basis of the amount of knowledge they possess. Little research has examined the effect of knowledge levels for recognized objects on inferences.

In the present study, we treat the subjective knowledge levels for recognized objects as unequal. In particular, we assume that subjective knowledge levels differ in degree of familiarity. How, then, does the familiarity of recognized objects influence inferences? Previous studies have suggested that familiarity plays important roles in various processes. Zajonc (1968) showed that people tend to prefer familiar items to unfamiliar ones, a phenomenon known as the mere exposure effect. Although the mere exposure effect concerns a subjective criterion such as preference judgments, familiarity also influences judgments pertaining to objective criteria, such as likelihood judgments or evaluations. Fox and Levav (2000) reported a familiarity effect in relative likelihood judgments: Participants were biased toward judging that more familiar events were more likely to occur than less familiar events. Alter and Oppenheimer (2008) showed that familiarity affects the process of evaluation: Participants evaluated familiar forms of currency (e.g., a standard $1 bill) as having greater purchasing power than unfamiliar currency (e.g., a rare $1 coin). These findings on familiarity effects observed in different tasks indicate that familiarity plays an important role in a wide range of psychological processes. On the basis of these findings, we propose a new model of inference processes that we call “familiarity-based inference.” We hypothesize that familiarity for recognized objects plays the role of a proximal cue in inference processes. We predict that when people are presented with two cities and are asked, “Which city has a larger population?”, they will infer that the more familiar city is more populous.

Previous studies (Dougherty, Franco-Watkins, & Thomas 2008; Pleskac, 2007) indicated that familiarity of recognized objects plays an important role in inference processes. Pleskac analyzed the recognition heuristic using signal detection theory, and Dougherty et al. constructed a computational model of familiarity-based inference and analyzed the difference between the recognition heuristic and familiarity-based inference. These studies showed the important role of familiarity in inference processes. However, the studies of Pleskac and Dougherty et al. are limited in that they showed the role of familiarity only from a theoretical perspective. They did not clarify how people actually make inferences using the familiarity of objects.

In their empirical studies, Pohl (2006) and Hilbig, Pohl, and Bröder (2009) have suggested that familiarity of recognized objects does influence inferences. Using a binary choice task requiring a population inference, they found that in pairs of cities in which only one of the two cities was recognized,1 participants tended to infer that the recognized city had the larger population size. A more relevant and interesting finding is that this tendency was more pronounced when participants had additional knowledge about a recognized city than when they merely recognized a city (i.e., knew only the city name). These findings suggest that familiarity of recognized cities influences inferences about population. However, the data of Pohl and Hilbig et al. have not resolved two issues. First, in their studies, the classification of familiarity level was categorical—“mere recognition” versus “recognition plus additional knowledge.” That is, they did not distinguish among levels of additional knowledge. Second, both studies examined effects of familiarity only in recognized/unrecognized pairs, in which the recognition heuristic could be applied. It remains unclear whether or not familiarity affects inferences about recognized/recognized pairs.

In the present research, we report the results of two experiments using the binary choice task of population inference. Our aim was to examine in detail the effects of familiarity of recognized objects on inferences. In the first experiment, we extended the research of Pohl (2006) and Hilbig et al. (2009) to examine our hypothesis about familiarity-based inference. In the second experiment, we used a scale for measuring familiarity that was finer grained than the one used in the first experiment. We then examined issues such as individual differences in, and the ecological rationality of, familiarity-based inference.

Experiment 1

Method

Participants

Thirty-three undergraduates (all women) from Japan Women’s University participated to fulfill a course requirement.

Tasks and materials

We conducted two tasks: a binary choice task of population inference, and a measurement of familiarity. In the binary choice task, participants were presented with two cities and were asked to choose the city with the larger population. In the measurement of familiarity, participants were asked whether they knew each of the cities that were used in the binary choice task, and, if they knew the city, they were also asked how much they knew about the city.

It has been argued that objects that participants know about from actual experience should be used as stimuli in research on the recognition heuristic, because the recognition heuristic was proposed as an inference strategy for natural environments (e.g., Pachur et al., 2008; Pachur & Hertwig, 2006). Although object recognition can be experimentally induced (e.g., Bröder & Eichler, 2006; Newell & Shanks, 2004), Pachur et al. (2008) demonstrated the possibility that experimentally induced recognition is not valid for assessing the recognition heuristic. In order to ensure a valid experimental setting for exploring inference processes, we constructed two lists (i.e., stimulus sets): A and B (see Table 1).2 The two lists differed in structure. In constructing List B, we chose the most populous city from each of the 47 prefectures3 in Japan and then selected the top 15 cities from these 47 cities. List A was constructed in the same way as List B, except that we first chose the second-most populous city from each of the 32 prefectures whose cities were not included in List B.4 To evaluate how recognizable these 30 cities were, 25 undergraduates were asked whether they knew each of them. The mean numbers of recognized cities were 8.16 (SD = 3.18) for List A and 13.40 (SD = 2.18) for List B. From these results, we predicted that List A would include some recognized/unrecognized pairs, in which the recognition heuristic could be applied. By contrast, List B would contain mainly recognized/recognized pairs, in which the recognition heuristic could not be applied.
Table 1

Two lists used in Experiments 1 and 2

| City Name (List A) | Criterion Value (Population) | City Name (List B) | Criterion Value (Population) |
|---|---|---|---|
| Kawaguchi-shi | 479,486 | Yokohama-shi | 3,544,104 |
| Machida-shi | 405,142 | Osaka-shi | 2,506,456 |
| Kohriyama-shi | 334,756 | Nagoya-shi | 2,145,208 |
| Takasaki-shi | 317,686 | Sapporo-shi | 1,869,180 |
| Tsu-shi | 283,167 | Kobe-shi | 1,498,805 |
| Sasebo-shi | 260,348 | Kyoto-shi | 1,392,746 |
| Hachinohe-shi | 248,776 | Fukuoka-shi | 1,352,221 |
| Matsumoto-shi | 223,472 | Hiroshima-shi | 1,141,304 |
| Hitachi-shi | 201,607 | Sendai-shi | 998,402 |
| Yamaguchi-shi | 187,539 | Chiba-shi | 905,199 |
| Takaoka-shi | 182,408 | Niigata-shi | 804,873 |
| Imabari-shi | 176,966 | Hamamatsu-shi | 786,776 |
| Miyakonojo-shi | 174,473 | Kumamoto-shi | 662,599 |
| Ogaki-shi | 159,661 | Okayama-shi | 659,561 |
| Ashikaga-shi | 159,040 | Kagoshima-shi | 601,675 |

Data provided by the Japan Geographic Data Center in 2006.

Procedure

Participants were tested individually on both tasks, using a computer, and responded with a mouse in both tasks. The binary choice task was conducted first, followed by the measurement of familiarity. We fixed this order because Goldstein and Gigerenzer (2002) and Pachur and Hertwig (2006) reported that the order of such tasks (i.e., inference task, recognition test) did not significantly affect participants’ responses.5

In the binary choice task, participants were presented with two city names on a computer screen. They selected one of the two cities to indicate their inference. The response was recorded when participants pressed the decision button on the screen; until then, they could change their response. In choosing a city, participants were encouraged to respond as quickly and accurately as possible. After the decision button was pressed, the presented city names disappeared, and pressing the “next” button initiated the next trial (i.e., the next city pair). This procedure was repeated for all city pairs. Half of the participants performed the binary choice task for pairs from List A first, followed by pairs from List B, with a 10-min break in between; the remaining participants received the opposite order. Each participant was presented with all combinations (i.e., 105 pairs) from both lists.

In the measurement of familiarity, participants were presented with a single city name on the computer screen on each trial. When participants did not know the city, they pressed a button labeled “not recognized” on the screen. When they knew the city, they reported their degree of familiarity using an ordinal rating scale represented by four horizontally arranged buttons, labeled “just know the name” at the far-left end and “know much about the city” at the far-right end. When participants completed the familiarity judgment on a given trial, they pressed the decision button on the screen to terminate the trial; pressing the “next” button then initiated the next trial. This procedure was repeated for the 30 cities. Familiarity was recorded on four points (1 = just know the name to 4 = know much about the city). Together with a score of 0 for “not recognized,” this yielded a 5-point (0–4) rating scale of familiarity.

Results and discussion

We operationally defined familiarity for a city according to participants’ responses in the measurement of familiarity. In some cases, neither city in a pair was recognized. We assumed that the responses to these pairs were made by random guessing, and we thus excluded them from the analyses.6

Recognized cities in the two lists and distributions of familiarity ratings

The mean number of recognized cities in List A was 10.27 (SD  =  2.45). This means that List A included some recognized/unrecognized pairs. In contrast, for List B, 29 of the 33 participants recognized all 15 cities, and the remaining participants recognized 14 cities. As was expected, this indicates that few recognized/unrecognized pairs were created from List B. Figure 1 shows the distributions of ratings for Lists A and B. The number of data points in this figure is 3,465 (33 participants by 105 pairs) for each of the lists, and proportions of the five ratings were calculated.
Fig. 1

Distributions of familiarity ratings for Lists A and B in Experiments 1 and 2

Choice patterns: aggregated data analysis

First, we analyzed choice patterns at the aggregated data level. In this analysis, we examined the relationship between choice patterns and differences (in the form of a ratio) in familiarity between the two cities in a pair. For all 105 pairs, we calculated the rate of choosing the larger city and an index, X_F, of the difference in familiarity between the two cities in a pair. X_F was calculated as follows:
$$ X_F = \frac{F_{ML}}{F_{ML} + F_{MS}}, $$
(1)
where F_ML and F_MS represent the mean familiarity of the larger and smaller cities in a pair, respectively. The range of X_F is, in principle, 0 ≤ X_F ≤ 1, depending on the differences in familiarity. For a pair in which participants were more familiar with the smaller city, X_F took a value less than 0.5. In contrast, for a pair in which participants were more familiar with the larger city, X_F took a value greater than 0.5. If participants were equally familiar with the larger and smaller cities, X_F fell around 0.5. We examined the relationship between the rate of choosing the larger city and X_F for all 105 pairs.
Previous studies (e.g., Bröder & Eichler, 2006; Hilbig et al., 2009; Newell & Fernandez, 2006; Newell & Shanks, 2004; Oppenheimer, 2003; Pachur et al., 2008; Pohl, 2006; Richter & Späth, 2006) have shown that cue knowledge about recognized cities affects inferences. For example, when people recognize a city, they may know that the city has a major-league soccer team and use this knowledge as a cue for inferring population. Therefore, we also examined whether cue knowledge about cities explained choice patterns. Although we are not certain what sorts of cues were actually used by the participants, we assume that cue knowledge that might have been used for the inference task was highly associated with the actual populations. For large cities, people would retrieve many supporting cues indicating that the cities are “populous.” On the other hand, for small cities, the likelihood of retrieving supporting cues that the cities are “populous” would be low. On the basis of these considerations, we defined an index, X_K, of the difference (in the form of a ratio) in cue knowledge between the two cities in a pair. X_K was calculated as follows:
$$ X_K = \frac{P_L}{P_L + P_S}, $$
(2)
where P_L and P_S represent the actual populations of the larger and smaller cities in a pair, respectively. In principle, the range of X_K is 0.5 < X_K < 1. Thus, when the difference in populations is small, X_K approaches 0.5, and when the difference in populations is large, X_K approaches 1.
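As an illustration, both indices can be computed directly from familiarity ratings and population figures. The following is a minimal Python sketch, not the authors' analysis code; the variable names, the example ratings, and the data layout are our own assumptions (populations are taken from Table 1).

```python
import numpy as np

def x_f(fam_larger, fam_smaller):
    """Familiarity-difference index (Eq. 1): mean familiarity of the
    larger city relative to the summed mean familiarities of both cities."""
    f_ml, f_ms = np.mean(fam_larger), np.mean(fam_smaller)
    return f_ml / (f_ml + f_ms)

def x_k(pop_larger, pop_smaller):
    """Cue-knowledge index (Eq. 2): population of the larger city
    relative to the summed populations of both cities."""
    return pop_larger / (pop_larger + pop_smaller)

# Hypothetical example pair: Yokohama-shi vs. Chiba-shi (populations from Table 1)
fam_yokohama = [4, 3, 4, 2]   # illustrative familiarity ratings on the 0-4 scale
fam_chiba    = [3, 2, 3, 1]
print(x_f(fam_yokohama, fam_chiba))   # > 0.5: the larger city is the more familiar one
print(x_k(3_544_104, 905_199))        # approaches 1 as the population gap grows
```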
Table 2 shows the coefficients of correlation among these three variables. Choice rates were more highly correlated with the difference in familiarity (X_F) than with the difference in actual population (X_K) in List A. This implies that participants were more likely to use familiarity than cue knowledge in making inferences during the task. Specifically, participants tended to choose the more familiar city as the more populous city in a pair. This choice pattern is in accord with familiarity-based inference. In order to rule out the possibility that the observed familiarity effect was spurious (i.e., a familiarity effect caused by cue knowledge), we calculated the partial correlation coefficient between X_F and choice rates while controlling for X_K. The coefficient of partial correlation was 0.742, suggesting that even when the effect of cue knowledge on inference was controlled for, the difference in familiarity still accounted for choice patterns.
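The partial correlation reported here can be reproduced from the three pairwise correlations. Below is a minimal sketch using the standard first-order partial-correlation formula; it assumes the three vectors (choice rates, X_F, and X_K over the 105 pairs) are already available, and it is not tied to any particular statistics package.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# x_f_vals, choice_rate, x_k_vals would each hold 105 values (one per pair):
# r = partial_corr(x_f_vals, choice_rate, x_k_vals)   # e.g., about 0.742 for List A
```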
Table 2

Coefficients of correlation among three variables in Experiments 1 and 2

|  |  | List A: X_K | List A: CR | List B: X_K | List B: CR |
|---|---|---|---|---|---|
| Experiment 1 | X_F | 0.197* | 0.748** | 0.510** | 0.551** |
|  | X_K | - | 0.374** | - | 0.633** |
| Experiment 2 | X_F | 0.355** | 0.761** | 0.563** | 0.632** |
|  | X_K | - | 0.371** | - | 0.626** |

CR denotes the choice rate of the larger city.

* p < .05; ** p < .001

The results for List B showed a different picture. As in List A, choice patterns in List B varied depending on the difference in familiarity between the two cities of a pair. However, there was a relatively strong correlation between X_F and X_K. The coefficient of partial correlation between X_F and choice rates while controlling for X_K was 0.343, suggesting that the effect of familiarity in List B was smaller than that in List A. The coefficient of correlation between X_K and choice rates was 0.633, indicating that participants were more likely to make inferences on the basis of cue knowledge than on the difference in familiarity. This interpretation is quite reasonable, given the characteristics of the cities in the two lists. Because List B consisted of well-known cities, it is quite likely that participants indeed retrieved and used cue knowledge about the populations of these cities. Previous studies have shown that when participants can easily access cue knowledge about recognized objects, they use it in making inferences (e.g., Bröder & Eichler, 2006; Hilbig et al., 2009; Newell & Fernandez, 2006; Newell & Shanks, 2004; Oppenheimer, 2003; Pachur et al., 2008; Pohl, 2006; Richter & Späth, 2006). Therefore, the results for List B are consistent with those of previous studies. These findings also suggest that participants did not use a single inference strategy but adopted different strategies depending on the situation. Some researchers have argued that people do not always use the recognition heuristic. Pachur and Hertwig (2006) and Pachur, Todd, Gigerenzer, Schooler, and Goldstein (in press) have claimed that the recognition heuristic is used to make inferences under uncertainty, that is, when cue knowledge about the criterion is not available. Because inference based on familiarity is a kind of heuristic, our findings provide empirical evidence for this claim.

Individual differences in familiarity-based inference

Next, we examined individual differences in familiarity-based inference. For each participant, we computed the proportion of actual inferences that were consistent with familiarity-based inference in List A.7

Figure 2 shows the proportions of accordance with familiarity-based inference for the 33 participants. The mean proportion of accordance was 0.791 (SD = 0.105), indicating that most of the participants’ inferences were in accord with familiarity-based inference. As Fig. 2 shows, however, there were substantial individual differences in the proportions of inferences that corresponded to familiarity-based inference; the proportions ranged between 0.481 and 0.968.
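For readers who wish to reproduce this index, the accordance proportion is simply the share of applicable pairs (those in which the two familiarity ratings differ) in which the participant chose the more familiar city. The sketch below follows that reading; the trial data structure is hypothetical, not the authors' format.

```python
def accordance_rate(trials):
    """trials: list of (fam_first, fam_second, chose_first) tuples for one
    participant. A pair counts only if the two familiarity ratings differ."""
    applicable = consistent = 0
    for fam_a, fam_b, chose_a in trials:
        if fam_a == fam_b:
            continue  # familiarity-based inference makes no prediction here
        applicable += 1
        chose_more_familiar = chose_a if fam_a > fam_b else not chose_a
        if chose_more_familiar:
            consistent += 1
    return consistent / applicable if applicable else float("nan")
```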
Fig. 2

Proportion of accordance with familiarity-based inference for the 33 participants in Experiment 1

Experiment 2

The results of Experiment 1 showed that familiarity plays an important role in inferences. Choice patterns were generally explained by differences in the familiarity of the cities in a pair. Specifically, participants tended to infer that the more familiar city of a pair was the more populous one. However, two issues remained unresolved.

First, we did not identify the factor that produces individual differences in familiarity-based inference. We showed that there were substantial individual differences in accordance with familiarity-based inference, but we did not identify why these differences arose. Second, we did not discuss the adaptive function of familiarity-based inference. Familiarity-based inference can be assumed to be a kind of heuristic strategy, and one of the most interesting questions about any heuristic is whether it has an adaptive function. Thus, we asked whether familiarity-based inference has an adaptive function, or whether it can result in irrational inferences.

In Experiment 2, we used a scale for measuring familiarity that was finer grained than that in Experiment 1 in order to clarify these unexplored issues. We examined the first issue using statistical models. For the second issue, we investigated whether familiarity is a good cue for inferences about populations and discussed the adaptive function of familiarity-based inference.

Method

Participants

Eighty-one undergraduates (all women) from Japan Women’s University participated in this experiment to fulfill a course requirement. They did not participate in Experiment 1.

Tasks, materials, and procedure

Tasks, materials, and procedure were the same as those of Experiment 1, with the exception of the method of measuring familiarity. In Experiment 2, familiarity of recognized cities was measured by presenting participants with a different scale on a computer screen. This scale consisted of a line that was labeled “just know the name” on the far left and “know much about the city” on the far right. Using a mouse, participants clicked the place on the scale that they felt best represented their degree of familiarity. The participants’ responses were recorded over a 100-point range (from 1 = just know the name to 100 = know much about the city).

Results and discussion

We operationally defined familiarity of a city according to participants’ responses to the measurement of familiarity. As in Experiment 1, when participants could not recognize a city, familiarity was defined as 0.

Recognized cities in the two lists and distributions of familiarity ratings

The mean number of recognized cities was 10.17 for List A (SD  =  2.32). For List B, 75 of the 81 participants recognized 15 cities, five participants recognized 14 cities, and one participant recognized 12 cities. As in Experiment 1, List A included some recognized/unrecognized pairs. In contrast, for List B, all pairs were recognized/recognized pairs for most participants. Figure 1 shows the distribution of familiarity ratings for Lists A and B. The number of data points in this figure is 8,505 (81 participants by 105 pairs) for each of the two lists. We used five categories of familiarity ratings in order to make categories analogous to those in Experiment 1.8 “Zero” familiarity means that participants did not recognize the city. The categories of familiarity 1, 2, 3, and 4 corresponded to ratings of 1–25, 26–50, 51–75, and 76–100, respectively. Figure 1 shows that the distributions of familiarity ratings for Experiments 1 and 2 were comparable, despite the difference in scale.

Choice patterns: aggregated data analysis

First, we analyzed choice patterns at the aggregated data level using the same method as in Experiment 1. For all 105 pairs, we calculated the rate of choosing the larger city and the index of the difference in familiarity (X_F) between the two cities in a pair. Then, we examined the relationships among the three variables: choice rates, X_F, and X_K.

Table 2 shows the coefficients of correlation among the three variables. In both lists, correlations existed among the three variables, as in Experiment 1. Therefore, we calculated partial correlations between X_F and choice rates in order to control for the effect of cue knowledge (i.e., X_K). The coefficients of partial correlation were 0.724 and 0.434 for Lists A and B, respectively. This result suggests that even when the effect of cue knowledge was controlled for, the difference in familiarity still accounted for inferences, and that familiarity influenced inferences more in List A than in List B.

Taken together, the general patterns in the choice data of Experiment 2 are analogous to those found in Experiment 1. Note that the influence of familiarity on choice behavior was more pronounced in List A, in which cue knowledge about the criterion was limited.

Individual differences in choice patterns

Next, we analyzed choice patterns with individual-level data and then examined individual differences. In Experiment 1, we discussed individual differences on the basis of the rate of accordance between choice patterns and familiarity-based inference, but we did not identify the source of the individual differences. Here, we adopted a model-based approach using multilevel logistic regression analysis (Gelman & Hill, 2007).9

We evaluated three models that may represent the choice rate of the larger city.

The first model is the familiarity-based inference (FI) model. FI is given as:
$$ \log \frac{P_{CL}}{1 - P_{CL}} = a X_{FI} + b $$
(3)
$$ X_{FI} = \frac{F_L}{F_L + F_S}, $$
(4)
where F_L and F_S represent the familiarity ratings for the larger and smaller cities in a pair, respectively. P_CL represents the choice rate for the larger city, and a and b denote free parameters for the weight and intercept, respectively. X_FI is basically the same as X_F in Eq. 1, except that individual data were used for X_FI.
The second model is the knowledge-based inference (KI) model. KI is given as:
$$ \log \frac{P_{CL}}{1 - P_{CL}} = a X_K + b, $$
(5)
where a and b denote free parameters for the weight and intercept, respectively. X_K is the same as in Eq. 2.
The third model is the recognition heuristic (RH) model:
$$ \log \frac{P_{CL}}{1 - P_{CL}} = a X_{Recog} + b, $$
(6)
where X_Recog is a dummy variable that distinguishes recognized/unrecognized pairs from recognized/recognized ones. X_Recog equals 1 when only the larger city is recognized and −1 when only the smaller city is recognized; it equals 0 in recognized/recognized pairs. The recognition heuristic predicts that people infer that the recognized city in a recognized/unrecognized pair has a larger population than the unrecognized city. RH assumes that only recognition of the city affects inferences, as the recognition heuristic predicts. We evaluated the predictive power of FI relative to RH. RH and FI make the same predictions for recognized/unrecognized pairs (note that X_FI = 1 or 0 for a recognized/unrecognized pair in FI); that is, they predict that people choose a recognized city as having a larger population. However, the recognition heuristic does not make any predictions for recognized/recognized pairs. In RH, inference patterns for recognized/recognized pairs are explained by the intercepts. In contrast, FI can predict inference patterns in both recognized/recognized and recognized/unrecognized pairs by using differences in familiarity among cities.
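The dummy coding of X_Recog described above can be written out explicitly. A minimal sketch, assuming that a familiarity rating of 0 means "not recognized"; the function name is ours.

```python
def x_recog(fam_larger, fam_smaller):
    """Dummy variable for the RH model: +1 if only the larger city is
    recognized, -1 if only the smaller city is recognized, 0 otherwise."""
    rec_l, rec_s = fam_larger > 0, fam_smaller > 0
    if rec_l and not rec_s:
        return 1
    if rec_s and not rec_l:
        return -1
    return 0
```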

We assumed that FI, KI, and RH were multilevel models with varying intercepts and slopes for each participant. Therefore, the free parameters in the three models (i.e., a and b) were simultaneously estimated for every participant.10 11 For List A, the three models were regressed on individual data. As was previously mentioned, most participants recognized all 15 cities in List B. Because regressing RH on only recognized/recognized pairs is meaningless, only FI and KI were regressed on the individual data of the 81 participants for List B. We assessed the goodness of fit of the models by log-likelihood values.
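As a rough illustration of the fitting procedure, the sketch below estimates the FI predictor separately for each participant with an ordinary logistic regression. This is only a simplified stand-in for the multilevel (varying-intercept, varying-slope) model the authors estimated, and the data frame layout is assumed; the KI and RH models would be fitted analogously by swapping the predictor column.

```python
import pandas as pd
import statsmodels.api as sm

def fit_fi_per_participant(df):
    """df columns (assumed): participant, fam_larger, fam_smaller, chose_larger (0/1).
    Returns per-participant slope a, intercept b, and log-likelihood for the
    FI model: logit P(choose larger) = a * X_FI + b, with X_FI = F_L / (F_L + F_S)."""
    results = []
    for pid, d in df.groupby("participant"):
        d = d[(d["fam_larger"] + d["fam_smaller"]) > 0]  # drop unrecognized/unrecognized pairs
        x_fi = (d["fam_larger"] / (d["fam_larger"] + d["fam_smaller"])).rename("x_fi")
        X = sm.add_constant(x_fi)
        fit = sm.Logit(d["chose_larger"], X).fit(disp=0)
        results.append({"participant": pid,
                        "b": fit.params["const"],
                        "a": fit.params["x_fi"],
                        "loglik": fit.llf})
    return pd.DataFrame(results)

# Overall fit of a model can then be compared by summing the per-participant log-likelihoods.
```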

Table 3 shows the result of this analysis. For List A, FI resulted in the best fit among the three models. In List B, KI showed a better fit than FI. These results for Lists A and B were consistent with those of the aggregated analysis. Thus, the analysis based on individual data also suggests that participants changed their inference strategy depending on the problem.
Table 3

Fit of the three models of binary choice inference (log-likelihood)

| Model | List A | List B |
|---|---|---|
| FI | −3,846 | −4,214 |
| KI | −4,659 | −4,098 |
| RH | −3,969 | - |

From these results, we explored individual choice patterns. As was mentioned previously, coefficients were estimated for each participant in the multilevel logistic regression analysis. We conducted K-Means clustering on the sets of coefficients (i.e., a and b) estimated for each participant and classified response patterns on the basis of these coefficients. We assumed that individual choice patterns in List A were explained by FI and that those in List B were explained by KI. Thus, in the clustering analysis, we used the results of FI for List A and those of KI for List B. In determining the number of clusters in K-Means clustering, we used scree plots of the within-cluster sum of squares (WSS; see Fig. 3). We adopted three clusters for both Lists A and B for the following reasons. First, the reduction of WSS was sharp up to three clusters. Second, the three clusters can be assumed not to represent rare choice patterns, because the number of participants categorized into each cluster was equal to or greater than 10.
Fig. 3

Scree plot for within-cluster sum of squares (WSS) in K-Means clustering in Experiment 2
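A WSS scree plot of this kind can be produced directly from the estimated coefficient pairs. The sketch below uses scikit-learn; it assumes coefs is an (81, 2) array of the per-participant (a, b) estimates, and the placeholder data stand in for the real coefficients.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# coefs: assumed (n_participants, 2) array of estimated (a, b) per participant
coefs = np.random.default_rng(0).normal(size=(81, 2))  # placeholder data only

wss = []
ks = range(1, 9)
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coefs)
    wss.append(km.inertia_)  # within-cluster sum of squares for k clusters

plt.plot(list(ks), wss, marker="o")
plt.xlabel("Number of clusters")
plt.ylabel("Within-cluster sum of squares (WSS)")
plt.show()

# Final solution with the chosen number of clusters (three in the article)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coefs)
```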

Figure 4 shows the results of the classification of choice patterns. Three regression lines are depicted using the mean values of the coefficients (a and b) for each of the three clusters. In List A, inference levels are almost identical for all three clusters when X_F equals 0.5, that is, when participants were equally familiar with the two cities in a pair (see the left graph in Fig. 4). This result suggests that the difference in choice patterns among the three clusters derived from the difference in the slopes of FI (i.e., free parameter a). Given that the detection of differences in the familiarity of two cities is a subjective psychological process, we speculate that individual differences in familiarity-based inference lie in sensitivity to differences in familiarity. According to this speculation, we can make a prediction about accordance with familiarity-based inference: The more sensitive participants are to differences in familiarity, the more likely their inferences will be in accord with familiarity-based inference. The estimated values of parameter a suggest that the level of sensitivity to differences in familiarity is ordered by steepness of slope, with cluster 1 > cluster 3 > cluster 2. Therefore, we predicted that the accordance rates of the three clusters would be ordered in the same way. For each participant, we computed the proportion of actual inferences that were consistent with familiarity-based inference. In calculating the proportion of accordance, we had to set a threshold for the difference in familiarity that people can discriminate. In this analysis, we set a 1-point difference as the threshold. That is, we assumed that people identify a difference in familiarity between two cities when the difference in familiarity ratings is equal to or greater than 1, and that familiarity-based inference can be applied to these pairs. Figure 5 shows the proportion of accordance with familiarity-based inference for each of the 81 participants, categorized into the three clusters. As the figure shows, the mean accordance rates differed among the three clusters: 0.865 (SD = 0.046), 0.623 (SD = 0.092), and 0.764 (SD = 0.058) for clusters 1, 2, and 3, respectively. We conducted multiple tests on the mean accordance rates for all pairs of the three clusters. There were significant differences in accordance with familiarity-based inference for all three pairs (see Table 4). Hence, our prediction about individual differences was corroborated.
Fig. 4

Individual differences in estimated slopes and intercepts for Lists A and B in Experiment 2. Regression lines are depicted using mean parameters (a and b) for the three clusters. The vertical axis denotes the logit of choice rate of the larger city (Logit[choice rate])

Fig. 5

Proportion of accordance with familiarity-based inference for the 81 participants in Experiment 2. Bar colors indicate the cluster to which each participant belongs

Table 4

Results of multiple tests of the accordance rate for familiarity-based inference

| Pair | Statistic | Corrected p-value (Bonferroni) | Effect size (r²) |
|---|---|---|---|
| Cluster 1–Cluster 2 | t(37) = 10.49 | p < .0001 | 0.748 |
| Cluster 1–Cluster 3 | t(60) = 6.79 | p < .0001 | 0.434 |
| Cluster 2–Cluster 3 | t(59) = 7.25 | p < .0001 | 0.471 |

Unlike those for List A, the estimated slopes were very similar among the three clusters in List B (see the right graph in Fig. 4). Thus, this result suggests that the difference in choice patterns among the three clusters resulted from the intercepts. Provided that X_K indicates the difference in the amount of cue knowledge for the two cities in a pair, this result implies that individual differences in knowledge-based inference lie in differences in the accuracy of cue knowledge.

Taken together, the analysis of individual choice data showed that familiarity-based inference successfully explained choice patterns in List A and that knowledge-based inference explained those in List B. These results are consistent with findings of the aggregated data analysis in Experiments 1 and 2. The analysis of individual differences suggests that individual differences in familiarity-based inference lie in sensitivity to differences in familiarity, and that individual differences in knowledge-based inference derive from differences in accuracy of knowledge.

Does familiarity-based inference lead to ecologically rational inferences?

Finally, we examine another aspect of familiarity-based inference: the possibility that familiarity serves an adaptive function. In order to examine this issue, we calculated validity and discrimination rate using the method of Gigerenzer and Goldstein (1999). Validity is a criterion of how often an inference cue leads to correct inferences. Using the familiarity data for the 105 pairs, familiarity validity (V_F) was calculated for each participant by the following equation:
$$ V_F = \frac{C_F}{C_F + W_F}, $$
(7)
where C_F is the number of cases in which familiarity-based inference would result in a correct inference if the participant used it, and W_F is the number of cases in which it would result in a wrong inference. Validity alone may be insufficient to evaluate the adaptive function of familiarity-based inference: If familiarity-based inference can be applied in only a few cases, it is not useful. Thus, we calculated another criterion, the discrimination rate, which represents the proportion of the 105 pairs to which the participant could apply familiarity-based inference. In calculating these criteria, we had to set a threshold for the difference in familiarity that people can discriminate. We set three threshold levels: 1, 10, and 25. That is, we assumed that participants identify a difference in familiarity between two cities only if the difference exceeds 1, 10, or 25. Furthermore, we compared the validity and discrimination rate of familiarity-based inference with those of the recognition heuristic, in order to evaluate the relative adaptive function of familiarity-based inference. Recognition validity (V_R) is calculated as follows:
$$ V_R = \frac{C_R}{C_R + W_R}, $$
(8)
where C_R is the number of cases in which the recognition heuristic would result in a correct inference if the participant used it, and W_R is the number of cases in which it would result in a wrong inference. We also calculated the discrimination rate at which a participant could apply the recognition heuristic across the 105 pairs.
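Both criteria can be computed per participant directly from the definitions above. The sketch below is a minimal, hedged reading of those definitions: the threshold argument corresponds to the 1-, 10-, and 25-point levels, "larger" refers to the objectively more populous city, and the input format is our own assumption. Recognition validity V_R follows the same pattern, counting only recognized/unrecognized pairs.

```python
def validity_and_discrimination(pairs, threshold=1):
    """pairs: list of (fam_larger, fam_smaller) tuples for one participant,
    where familiarity 0 means 'not recognized'.
    Returns (V_F, discrimination rate) for familiarity-based inference."""
    correct = wrong = 0
    for f_l, f_s in pairs:
        if abs(f_l - f_s) < threshold:
            continue  # familiarity difference too small to discriminate
        if f_l > f_s:
            correct += 1  # the more familiar city is in fact the more populous one
        else:
            wrong += 1    # familiarity would point to the smaller city
    applicable = correct + wrong
    validity = correct / applicable if applicable else float("nan")
    return validity, applicable / len(pairs)
```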
We calculated these values for Lists A and B for each participant. Figure 6 shows the mean validity and discrimination rate for familiarity-based inference at the three threshold levels and for the recognition heuristic. Although validity varied among the three threshold levels, familiarity-based inference generally showed high validity (around .70). The validity of the recognition heuristic was greater than that of familiarity-based inference for both Lists A and B. However, there was a trade-off between validity and discrimination rate: As validity increased, discrimination rate decreased. Although the validity of the recognition heuristic was slightly higher than that of familiarity-based inference, the discrimination rate of the recognition heuristic was, on average, notably lower. We interpret this pattern to suggest that although the recognition heuristic is useful, it can be applied only on limited occasions. In contrast, familiarity-based inference is slightly less valid than the recognition heuristic, but it can be applied on a wider range of occasions.
Fig. 6

Validity and discrimination rate of familiarity-based inference and the recognition heuristic in Experiment 2. F01, F10, F25, and RH on the horizontal axis denote familiarity thresholds of 1, 10, and 25 and the recognition heuristic, respectively

In sum, familiarity-based inference showed validity comparable to that of the recognition heuristic and high applicability to inferences. Because previous studies have argued for the adaptive nature of the recognition heuristic (e.g., Goldstein & Gigerenzer, 2002), familiarity-based inference can serve as an ecologically rational heuristic (e.g., Gigerenzer et al., 1999).

General discussion

In two experiments, we examined hypotheses about familiarity-based inference. Both aggregated and individual data analyses showed that participants’ inference patterns can be satisfactorily explained by a measure that captures the difference in familiarity associated with cities under comparison. In particular, it was found that in binary choice inference about population size, participants tended to choose the more familiar city as the more populous city. These findings provide new evidence about the inference processes underlying binary choice behavior. The recognition heuristic appears to predict inference patterns only in recognized/unrecognized pairs. In contrast, a model that incorporates differences in object familiarity can reliably predict inference patterns whenever two objects differ in familiarity. This suggests that differences in familiarity can provide a new explanation of the inference processes involved in responding to recognized/recognized pairs; in contrast, the recognition heuristic cannot address this situation.

Familiarity-based inference: representations in statistical models and relationship with the recognition heuristic

In Experiment 2, we proposed a statistical model of familiarity-based inference, FI. In this model, the difference in familiarity of two cities is represented in the form of a ratio. However, the difference in familiarity can be represented in other forms; for example, it can simply be represented as a difference. In that case, familiarity-based inference can be represented as follows:
$$ \log \frac{P_{CL}}{1 - P_{CL}} = a X_{Diff} + b $$
(9)
$$ X_{Diff} = F_L - F_S $$
(10)
The two models represented by Eqs. 3 and 9 predict different inferences for recognized/unrecognized pairs. In Eq. 3, X_FI equals 0 or 1 in recognized/unrecognized pairs. In other words, in recognized/unrecognized pairs, the effect of familiarity is assumed to be equivalent and maximal in Eq. 3. For example, the effect of familiarity in recognized (F_L = 20)/unrecognized (F_S = 0) pairs is assumed to be as strong as in recognized (F_L = 40)/unrecognized (F_S = 0) pairs, because X_FI always equals 1 when participants cannot recognize the smaller city. On the other hand, 0 < X_FI < 1 in recognized/recognized pairs. That is, Eq. 3 assumes that the effect of familiarity in recognized/unrecognized pairs, in which the recognition heuristic can be applied, differs from that in recognized/recognized pairs, in which the recognition heuristic cannot be applied. This assumption is compatible with the recognition heuristic, because the recognition heuristic states that recognition itself influences inferences. However, as was previously mentioned, Pohl (2006) and Hilbig et al. (2009) have suggested that familiarity influences inferences in recognized/unrecognized pairs, in which the recognition heuristic can be applied. In Eq. 9, the effect of familiarity in recognized/unrecognized pairs is not assumed to be equivalent: X_Diff varies, depending on the familiarity of the recognized city. For example, the effect of familiarity in recognized (F_L = 40)/unrecognized (F_S = 0) pairs is twice as strong as in recognized (F_L = 20)/unrecognized (F_S = 0) pairs. According to the findings of Pohl (2006) and Hilbig et al., Eq. 9 may be valid as the familiarity-based inference model, because familiarity may have influenced inferences in recognized/unrecognized pairs. Furthermore, Eq. 9 assumes that the effect of familiarity in recognized/unrecognized pairs is not always maximal. For example, Eq. 9 assumes that the effect of familiarity is the same for recognized (F_L = 20)/unrecognized (F_S = 0) pairs and recognized (F_L = 40)/recognized (F_S = 20) pairs.
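The contrast between the ratio form (Eq. 4) and the difference form (Eq. 10) can be made concrete with the worked example above. A minimal numeric sketch; the familiarity values are the illustrative ones from the text, not data.

```python
def x_fi(f_l, f_s):
    return f_l / (f_l + f_s)   # ratio form (Eq. 4)

def x_diff(f_l, f_s):
    return f_l - f_s           # difference form (Eq. 10)

# Recognized/unrecognized pairs: the ratio form saturates at 1,
# whereas the difference form still distinguishes them.
print(x_fi(20, 0), x_diff(20, 0))    # 1.0, 20
print(x_fi(40, 0), x_diff(40, 0))    # 1.0, 40

# Recognized/recognized pair with the same 20-point gap as the first pair above.
print(x_fi(40, 20), x_diff(40, 20))  # ~0.67, 20
```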

We regressed Eq. 9 on individual data from List A, as in the analysis of individual differences in Experiment 2, and compared the goodness of fit of the two models of familiarity-based inference. The log-likelihood was −4,205 for Eq. 9; thus, Eq. 3 indicated a better fit (see Table 3). Although we realize that an examination of psychological validity will be necessary in further research, Eq. 3 can be assumed to be more valid for familiarity-based inference than Eq. 9 in terms of model fitting.

These results are not necessarily consistent with the findings of Pohl (2006) and Hilbig et al. (2009). However, our method of analysis was quite different from theirs. They divided recognized objects into recognition plus additional knowledge (which they denoted as “R+”) or mere recognition (which they denoted as “mR”) and compared inference patterns between recognized(R+)/unrecognized pairs and recognized(mR)/unrecognized pairs. In short, they dichotomized response patterns in terms of the knowledge of the recognized objects; we call this the “dichotomization method.” In contrast, our approach was a model-based approach using multilevel regression analysis, which we call the “regression method.” MacCallum, Zhang, Preacher, and Rucker (2002) pointed out that dichotomization and regression methods may yield different conclusions, and that the regression method is generally more appropriate when relationships among variables are examined. Although further research will be necessary to scrutinize why the gap between our findings and those in the previous studies was produced, we claim that our analysis is better justified in terms of statistical method.

RH, proposed in Experiment 2, can be regarded as a statistical model of the recognition heuristic. As was previously mentioned, FI and RH can be regarded as the same model for recognized/unrecognized pairs. If Eq. 3 has psychological validity for representing familiarity-based inference, familiarity-based inference can be regarded as a generalized model of the recognition heuristic that is applicable to both recognized/unrecognized and recognized/recognized pairs.

Difference between familiarity-based inference and the fluency heuristic

We suggest that familiarity-based inference is analogous to the fluency heuristic (Hertwig, Herzog, Schooler, & Reimer 2008; Schooler & Hertwig, 2005). The fluency heuristic was described as follows by Hertwig et al. (2008): “If two objects, a and b, are recognized, and one of two objects is more fluently retrieved, then infer that this object has the higher value with respect to the criterion” (p. 1192).

Although we have not obtained empirical evidence, we predict that the greater the familiarity of an object, the more fluently the object will be retrieved. Thus, familiarity-based inference and the fluency heuristic will predict the same inference in the same situation.

We do not deny the possibility that familiarity-based inference can be explained by the fluency heuristic. Nevertheless, we argue that research on familiarity-based inference is definitely important, at least in the following respects. First, research on familiarity-based inference will clarify the psychological processes of the fluency heuristic. In their discussion, Hertwig et al. (2008) pointed out the possibility that familiarity of objects is involved with processes of the fluency heuristic. Hence, familiarity may play an important role in the fluency heuristic. However, Hertwig et al. did not examine the role of familiarity of objects in inference processes. The findings of the present study will contribute to clarification of the psychological processes of the fluency heuristic. Second, familiarity-based inference may be a new model of binary choice inference that is applicable to a wide range of psychological processes. As we pointed out in the introduction, familiarity influences various psychological processes, such as preference judgment (Zajonc, 1968), evaluation (Alter & Oppenheimer, 2008), and relative likelihood judgment (Fox & Levav, 2000). Accordingly, research on familiarity-based inference will contribute to the clarification of psychological processes other than those involved in binary choice inference.

Future research on familiarity-based inference

Finally, we point out that theoretical research on familiarity-based inference will be necessary in future research. The present study was the first to empirically examine the role of familiarity in inference processes. As was previously mentioned, Pleskac (2007) and Dougherty et al. (2008) evaluated the role of familiarity from a theoretical perspective. However, their research goal did not necessarily lie in clarifying the role of familiarity in inference processes. Thus, the theoretical understanding of the role of familiarity in inference processes remains insufficient. Research from a theoretical perspective will clarify the psychological processes of the role of familiarity in inference, for example, by examining the difference between familiarity-based inference and the fluency heuristic.

Footnotes

  1.

    Hereafter, we use the following terms to refer to recognition of objects. When one of two objects is recognized and the other is not, we call this a “recognized/unrecognized” pair. When both objects are recognized, we call this a “recognized/recognized” pair.

  2.

    In Japanese, there are three writing systems: Kanji, Hiragana, and Katakana. Because most city names are written in Kanji, we chose only cities whose names are written in Kanji.

  3.

    There are 47 prefectures in Japan; a prefecture corresponds to a state in the U.S. We adopted “shi” as the city unit; a “shi” corresponds to a city in the U.S.

  4.

    In constructing Lists A and B on the basis of population, some cities belong to the same prefecture. For example, Yokohama-shi (first rank in List B) and Kawasaki-shi (first rank in List A) belong to Kanagawa Prefecture. In order to avoid using cities from the same prefecture, the 15 prefectures whose cities were used in List B were excluded in constructing List A. That is, we chose the second-most populous city from each of the remaining 32 prefectures and then selected the top 15 cities from these 32 cities.

  5.

    In the strict sense, the recognition test that was conducted in Goldstein and Gigerenzer (2002) and Pachur and Hertwig (2006) was different from the measurement of familiarity in the present study. However, in terms of the properties of these two tasks, we assumed that there were no essential differences between the recognition test and our measurement of familiarity.

  6.

    We deleted these data from the analyses of Experiment 2.

  7.

    Because analyses of general choice patterns suggested that cue knowledge for recognized cities influenced inferences in List B, we conducted this analysis only for List A.

  8.

    These five categories were used only for plotting in Fig. 1. The raw scale was used in analyses.

  9.

    In the multilevel model, some assumptions are required. The assumption of independence of errors might be of concern for the present analysis because we used data from repeated measurements (this is called “panel data” in Train, 2009). For example, in the repeated choices, a participant’s past choices might influence his or her current choice (e.g., “because I chose city A in the pair (A–B) and city B in the pair (B–C), I will choose city A in the pair (A–C)”). In this case, the independence of errors might be violated. However, we regard participants’ repeated choices as independent for the following reasons.

    First, if we assume that participants make inferences on the basis of familiarity, repeated choices can be regarded as independent. In familiarity-based inference, participants have only to take into account familiarity of presented cities in the current choice. Thus, past choices do not have any influence on the current choice in familiarity-based inference. Second, since we cannot specify the content of the effect of past choices on following choices, it would be reasonable to assume that each choice situation for a participant is independent (Train, 2009).

  10.

    If all pairs are used for multilevel logistic regression analysis, the number of data points is 8,505 (81 participants by 105 pairs). However, unrecognized/unrecognized pairs were deleted. Hence, the numbers of data points used for multilevel logistic regression analysis were 7,541 and 8,502 for Lists A and B, respectively.

  11.

    Free parameters for each participant consist of two components, fixed and random effects. The fixed effect reflects an estimated average coefficient, and the random effect reflects an estimated error for each participant. Free parameters for each participant are obtained by simply adding the fixed and random effects (Gelman & Hill, 2007).

Notes

Acknowledgment

This work was in part supported by the Japan Society for the Promotion of Science KAKENHI (Grant 20700235) and the Support Center for Advanced Telecommunications Technology Research (SCAT). We thank three anonymous reviewers for their insightful suggestions.

References

  1. Alter, A. L., & Oppenheimer, D. M. (2008). Easy on the mind, easy on the wallet: The roles of familiarity and processing fluency in valuation judgments. Psychonomic Bulletin & Review, 15, 985–990. doi:10.3758/pbr.15.5.985
  2. Bröder, A., & Eichler, A. (2006). The use of recognition information and additional cues in inferences from memory. Acta Psychologica, 121, 275–284. doi:10.1016/j.actpsy.2005.07.001
  3. Dougherty, M. R., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics. Psychological Review, 115, 199–211. doi:10.1037/0033-295x.115.1.199
  4. Fox, C. R., & Levav, J. (2000). Familiarity bias and belief reversal in relative likelihood judgment. Organizational Behavior and Human Decision Processes, 82, 268–292. doi:10.1006/obhd.2000.2898
  5. Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. New York: Cambridge University Press.
  6. Gigerenzer, G., & Goldstein, D. G. (1999). Betting on one good reason: The take the best heuristic. In G. Gigerenzer, P. M. Todd, & The ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 75–95). New York: Oxford University Press.
  7. Gigerenzer, G., Todd, P., & The ABC Research Group. (1999). Simple heuristics that make us smart. New York: Oxford University Press.
  8. Gilovich, T., Griffin, D. W., & Kahneman, D. (Eds.). (2002). Heuristics and biases. New York: Cambridge University Press.
  9. Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90. doi:10.1037/0033-295x.109.1.75
  10. Hertwig, R., Herzog, S. M., Schooler, L. J., & Reimer, T. (2008). Fluency heuristic: A model of how the mind exploits a by-product of information retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1191–1206. doi:10.1037/a0013025
  11. Hilbig, B. E., Pohl, R. F., & Bröder, A. (2009). Criterion knowledge: A moderator of using the recognition heuristic? Journal of Behavioral Decision Making, 22, 510–522. doi:10.1002/bdm.644
  12. Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.
  13. Kahneman, D., & Tversky, A. (Eds.). (2000). Choices, values, and frames. New York: Cambridge University Press.
  14. MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19–40. doi:10.1037/1082-989x.7.1.19
  15. Newell, B. R., & Fernandez, D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: Two critical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 333–346. doi:10.1002/bdm.531
  16. Newell, B. R., & Shanks, D. R. (2004). On the role of recognition in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 923–935. doi:10.1037/0278-7393.30.4.923
  17. Oppenheimer, D. M. (2003). Not so fast! (and not so frugal!): Rethinking the recognition heuristic. Cognition, 90(1), B1–B9. doi:10.1016/s0010-0277(03)00141-0
  18. Pachur, T., Bröder, A., & Marewski, J. N. (2008). The recognition heuristic in memory-based inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision Making, 21, 183–210. doi:10.1002/bdm.581
  19. Pachur, T., & Hertwig, R. (2006). On the psychology of the recognition heuristic: Retrieval primacy as a key determinant of its use. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 983–1002. doi:10.1037/0278-7393.32.5.983
  20. Pachur, T., Todd, P. M., Gigerenzer, G., Schooler, L. J., & Goldstein, D. G. (in press). Is ignorance an adaptive tool? A review of recognition heuristic research. In P. M. Todd, G. Gigerenzer, & The ABC Research Group (Eds.), Ecological rationality: Intelligence in the world. New York: Oxford University Press.
  21. Pleskac, T. J. (2007). A signal detection analysis of the recognition heuristic. Psychonomic Bulletin & Review, 14, 379–391.
  22. Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 251–271. doi:10.1002/bdm.522
  23. Reimer, T., & Katsikopoulos, K. V. (2004). The use of recognition in group decision-making. Cognitive Science, 28, 1009–1029. doi:10.1016/j.cogsci.2004.06.004
  24. Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 150–162. doi:10.1037/0278-7393.32.1.150
  25. Schooler, L. J., & Hertwig, R. (2005). How forgetting aids heuristic inference. Psychological Review, 112, 610–628. doi:10.1037/0033-295x.112.3.610
  26. Snook, B., & Cullen, R. M. (2006). Recognizing National Hockey League greatness with an ignorance-based heuristic. Canadian Journal of Experimental Psychology, 60, 33–43. doi:10.1037/cjep2006005
  27. Train, K. E. (2009). Discrete choice methods with simulation. New York: Cambridge University Press.
  28. Volz, K. G., Schooler, L. J., Schubotz, R. I., Raab, M., Gigerenzer, G., & von Cramon, D. Y. (2006). Why you think Milan is larger than Modena: Neural correlates of the recognition heuristic. Journal of Cognitive Neuroscience, 18, 1924–1936. doi:10.1162/jocn.2006.18.11.1924
  29. Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9, 1–27.

Copyright information

© Psychonomic Society, Inc. 2010

Authors and Affiliations

  • Hidehito Honda (1, 4)
  • Keiga Abe (2)
  • Toshihiko Matsuka (1)
  • Kimihiko Yamagishi (3)

  1. Chiba University, Chiba, Japan
  2. Aoyama Gakuin University, Tokyo, Japan
  3. Tokyo Institute of Technology, Tokyo, Japan
  4. Department of Cognitive and Information Science, Chiba University, Chiba, Japan
