Behavior Research Methods

Volume 49, Issue 6, pp. 2163–2172

Normative data for Chinese compound remote associate problems

  • Ching-Lin Wu
  • Hsueh-Chih Chen


The Remote Associates Test (RAT) is a well-known measure of creativity; each RAT item is composed of three unrelated stimulus words, and the participant's task is to find an answer word that combines with each of the stimulus words to form three actual nouns. Researchers have modified the RAT to develop compound remote associate problems, which emphasize combining vocabulary to form compound words. In creativity research on Mandarin speakers, the Chinese RAT has been widely applied for over 10 years. The original RAT, the compound remote associate problems, and the Chinese RAT share several advantages: they are convenient to use, they are scored objectively, and new items are easy to develop in the quantities that psychological assessments require. Many language editions of the RAT and of the compound remote associate problems already exist; in particular, normative data have been derived for the English and Italian versions. Because approximately 20% of the world's population are native Mandarin speakers, and because increasing numbers of people are choosing Mandarin as a second language, the need for Mandarin-language research resources is growing; however, normative data for the Chinese RAT still do not exist. To address this gap, in the present study we developed Chinese compound remote associate problems and analyzed item-level passing rates, problem-solving times, and other normative data, using the responses of 253 participants across three experiments.


Keywords: Creativity · Remote association · Problem solving · Mandarin

The Remote Associates Test (RAT) is a widely used tool for measuring creative potential (Mednick, 1968). The RAT's design is based on the associative theory of creativity, which holds that more creative individuals can combine mutually remote elements into novel and useful combinations (Mednick, 1962). Following this theory, Mednick developed written material to measure one's associative abilities. Each item is composed of three different stimulus words (e.g., "rat," "blue," and "cottage"), and the participant's task is to find a single answer word that combines with the three stimuli to produce three actual concepts. The answer to this example is "cheese," because cheese combines with the three stimuli to form rat cheese, blue cheese, and cottage cheese. The RAT has two editions of 30 items each. Performance on the RAT is positively correlated with creativity (Datta, 1964; Mednick, 1963), indicating its usefulness as a tool for assessing this ability.

Later, Bowden and Jung-Beeman (2003b) examined the items on the RAT (Mednick & Mednick, 1967) and observed that single items could mix different types of association. For example, if the stimuli were "same," "head," and "tennis," then the answer would be "match": "same" and "match" have similar meanings, whereas "match head" and "tennis match" are compound words. Bowden and Jung-Beeman (2003b) argued that an item's design was faulty if it used two or more forms of association. Therefore, they used a compound-word method to develop items, ultimately creating 144 compound remote associate problems, which are used especially in neurocognitive research.

Whether for the RAT or the compound remote associate problems, the strength of such a test rests on the following factors: it is quick to complete, easy to use, has close-ended answers, and is scored objectively (Huang, Chen, & Liu, 2012). When participants answer correctly, each item takes approximately 10 s (Fleck & Weisberg, 2004). Furthermore, given an effective vocabulary database, a new edition of the test can easily be developed by selecting material from the database, thereby decreasing practice effects; this convenience of mass production satisfies the need for a large volume of items in psychological experiments. The RAT and compound remote associate problems are widely applied across research domains, such as assessment of creative potential (Ansburg & Hill, 2003; Baer & Kaufman, 2008; Brown, Dutton, & Cook, 2001), diagnosis of mental illness (Heatherton & Vohs, 2000; Vohs & Heatherton, 2001), neural mechanisms of creative processes (Bowden & Jung-Beeman, 2003a; Weinstein & Graves, 2001, 2002), analysis of the latent traits of remote associates tests (Chen & Wu, 2014; Gianotti, Mohr, Pizzagalli, Lehmann, & Brugger, 2001; Kaufmann, 2003), and the effects of incubation on creativity (Cai, Mednick, Harrison, Kanady, & Mednick, 2009). Additionally, these two tests have been translated into a variety of languages, including Chinese, Japanese, Italian, Jamaican, and Hebrew (Baba, 1982; Chang, Wu, Chen, & Wu, 2016; Hamilton, 1982; Huang et al., 2012; Jen, Chen, Lien, & Cho, 2004; Nevo & Levin, 1978; Salvi, Costantini, Bricolo, Perugini, & Beeman, 2016), thereby providing a way to assess the creative potential of speakers of different languages.

A Chinese version of the RAT was initiated by Jen et al. (2004), who integrated Mednick and Mednick's (1967) design with Bowden and Jung-Beeman's (2003b). Jen et al. considered the differences between the linguistic rules and attributes of Chinese and English and used "character pairing," which suits Chinese, rather than "compound words," which suits English. Thus the Chinese Remote Associates Test (CRAT) was developed. Each item on the CRAT likewise contains three stimuli, for example, 生 ("to generate"), 天 ("the sky"), and 溫 ("warm"); participants are tasked with finding a target word that pairs with each stimulus to create three actual two-character words. The answer to this example is 氣 ("air"), yielding the two-character words 生氣 ("anger"), 天氣 ("weather"), and 氣溫 ("temperature"). The linking design of the CRAT is therefore similar to that of the compound remote associate problems, in that stimuli are linked to a target word by forming compound words. Additionally, empirical studies have indicated a positive relationship between the usage frequency of the resulting compounds and the likelihood of making the link: when associating forward from a stimulus, it is easier to solve the example above by linking 天 ("the sky") with 氣 ("air") to create 天氣 ("the weather"; Chen & Wu, 2014). To address this issue, when Jen et al. (2004) designed the CRAT items, they controlled for the usage frequency of the compound words (each composed of a stimulus word and the target word) and for the associative direction (forward, backward), that is, the order of the stimulus and target words.
The four usage-frequency types of compound words are HHH, HHL, HLL, and LLL (H, high; L, low); the four associative-direction types are BBB, BBF, BFF, and FFF (B, backward; F, forward). By distributing the items evenly across these types, the difficulty level of the entire test is balanced. The CRAT has an A set and a B set, each comprising 30 items. The test is scored by assigning one point per correct answer and no points for a wrong answer; the total score represents an individual's remote association ability. The CRAT has been widely used among Mandarin speakers (Chen, Peng, Tseng, & Chiou, 2008; Chiu & Yau, 2010).

Currently, the CRAT is administered in paper-and-pencil format (Jen et al., 2004; Chen, Peng, & Wu, 2011): participants view all of the items at once and complete the test within an overall time limit, such as 30 items in 15 min. This differs from cognitive experiments that present only one item at a time. Even with the same total time limit, participants taking the paper-and-pencil format can allocate a different amount of time to each item, strategically passing over harder items and spending more time on easier ones. In the lab, by contrast, every item is presented one at a time with the same time limit per item; even participants who solve an item quickly must wait to move on to the next one. It is therefore necessary to establish normative data for both paper-and-pencil and computerized editions.

Since one aim of the computerized edition of the test is to understand the mechanisms of creativity, we investigated participants' behavioral responses during problem solving. By collecting data on the problem-solving process for every item, we can examine why some items are more difficult and why some people cannot correctly solve certain problems. Empirical research has found that participants generally use the following problem-solving strategy: start from one stimulus to generate possible words, then cross-check the possibilities against the other two stimuli to arrive at the correct answer (Smith, Huber, & Vul, 2013). Participants also carry associations over from the previous item while searching through the stimuli (Smith et al., 2013). Response times likewise provide important information about mental processes (Hsieh, 2013; Posner, 1978): remote association items generally take longer to answer (Collins & Loftus, 1975; Gruszka & Necka, 2002), and more difficult RAT items take more time still.

Since the CRAT's content consists of Chinese characters, item difficulty is related to the features of Chinese characters (Chen & Wu, 2014). Chen and Wu used a logistic latent trait model with linear constraints to investigate the influence of item components on CRAT difficulty. The results showed that polyphonic words, backward associative direction, low usage frequency of target words, and higher-linkage words made items more difficult. The attributes of words, such as their syntactic functions, may also influence item difficulty, but research on this question is still scarce. It is therefore worthwhile to consider various characteristics of the Chinese language. Some Chinese characters are heteronyms: when they combine with other characters to form compound words, their pronunciation and meaning change. For example, the heteronymous character 行 is pronounced xíng in 行人 (xíngrén, "the pedestrian") but háng in 行業 (hángyè, "the industry"), and the meanings of 行人 and 行業 differ accordingly. In summary, when the target word of a CRAT item is a heteronym, participants need prior knowledge that the character has two or more pronunciations and meanings, which then allows them to link possible answers.

Compound remote associate problems are also a type of insight problem (Bowden & Jung-Beeman, 2003b), primarily because the two share similar processes: (1) participants experience incorrect retrieval; (2) participants usually cannot describe how they arrived at their answers (Ben-Zur, 1989); and (3) an "aha" feeling occurs at the moment of solution. In addition, according to Wakefield's (1992) typology of creativity problems, both the RAT and insight problems pair open-ended questions with close-ended response options. One study showed that performance on the RAT and performance on insight problems are moderately correlated (r = .40 ~ .50; Huang et al., 2012). Compound remote associate problems therefore speak both to participants' remote association abilities and to their performance on insight problems. Weisberg (1995) distinguished two types of insight problems, pure and pseudo, and argued that a true insight problem can be solved only by restructuring the initial representation of the problem: it cannot be solved through any algorithm or by trial and error, and an entirely new perspective is the only route to a solution (Knoblich, Ohlsson, Haider, & Rhenius, 1999). The phonic shift of a heteronym is precisely such a transfer of problem representation. Chen et al. (2011) first used Chinese heteronyms in the CRAT and found that items with heteronyms were more difficult than items without them, consistent with insight problems involving distinct processes.

Individual differences and test performance are closely related. Since the test's content is presented in Chinese characters, prior knowledge of the Chinese language is critical and can at times be confounded with item difficulty. Previous researchers designed items using high-frequency compound words in the hope of eliminating the influence of prior knowledge (Jen et al., 2004). However, no empirical data exist on whether performance on the CRAT is related to prior language knowledge or to intelligence. As for gender, performance on compound remote associate problems has been found not to differ between men and women (Salvi et al., 2015).

Despite the development of the CRAT and its two sets (Jen et al., 2004), a greater volume of items is still needed to assess the process of creativity and collect stable data. Furthermore, the current CRAT (Jen et al., 2004) does not include items with heteronyms, which restricts the generalizability of the test results. To address these issues, in the present study we aimed to develop a new set of items that includes heteronyms and to increase the number of items to a level sufficient for research on related topics. To distinguish the new set from the current version (Jen et al., 2004), and to highlight its design's similarity to the compound remote associate problems, it was named the "Chinese compound remote associate problems" (CCRAP). The present study also developed different formats of the CCRAP (paper and pencil, computerized) with different time limits (20 s and 30 s per item), and collected and analyzed normative data, such as passing rates and response times for correct answers, in hopes of contributing to future and ongoing studies, especially neurocognitive research.



The present study had 253 participants and collected data through three experiments: a paper-and-pencil questionnaire administered to groups, a behavioral experiment administered individually, and a functional magnetic resonance imaging (fMRI) experiment administered individually. The paper-and-pencil questionnaire was completed by 89 participants, including 39 males and 50 females; the behavioral experiment was completed by 93 participants, including 43 males and 51 females; the fMRI experiment was completed by 71 participants, including 34 males and 37 females. Participants ranged in age from 18 to 34 years (mean = 22.67, SD = 3.24), with 12 to 18 years of education (mean = 15.87, SD = 1.96). All participants were right-handed native Mandarin speakers and provided informed consent before the experiments began.


The design of the Chinese compound remote associate problems (CCRAP) was based on the RAT of Mednick and Mednick (1967) and followed the formats of both the compound remote associate problems (Bowden & Jung-Beeman, 2003b) and the Chinese Remote Associates Test (CRAT; Jen et al., 2004). Word pairing was used to develop the CCRAP items. Each item comprised three stimulus words, for example, 今 ("current"), 輕 ("light"), and 去 ("go"), and the participant's task was to come up with an answer, a target word, that could combine with all three stimuli to create three actual two-character words. Here the answer is 年 ("year"), and the resulting two-character words are 今年 ("this year"), 年輕 ("young"), and 去年 ("last year"). We developed 120 items for the CCRAP, using the Mandarin Word Frequency Statistics Report (Ministry of Education, Taiwan, 1998) as the vocabulary source. In the present study, a "word" was defined as a two-character compound with actual meaning in real use; every word used in the study appears, with its definition, in the Revised Mandarin Dictionary. To ensure that participants could generate associations while completing the items, we selected stimulus words that could each link to 20 or more actual two-character compound words, so that participants could associate every stimulus with at least 20 real compounds. Furthermore, to equalize the possibility of associating the stimuli with the target word, we also controlled the usage frequency of the vocabulary triggered by the stimuli so that it was similar across items (mean = 31.65).
Finally, to avoid response bias while maintaining item difficulty, we also controlled the associative directions in a 3:5 ratio, so that the full CCRAP set comprised 45 forward-association items and 75 backward-association items. In a forward association, the stimulus precedes the target word: in the example above, the stimulus 今 is associated with the target word 年 to create the two-character word 今年, and because 年 follows 今, this is a forward association. In a backward association, the target word precedes the stimulus: associating the stimulus 輕 with the target word 年 produces 年輕, and because the target word 年 precedes the stimulus 輕, this is a backward association. To avoid response bias, each item mixed stimuli with different associative directions, and the stimuli were presented in random order. Furthermore, to understand the influence of phonic change on item difficulty and problem-solving strategy, we designed 40 heteronymous items. For example, given the stimuli 觀 ("view"), 器 ("instrument"), and 配 ("match"), the target word is 樂 ("happy/music"), yielding the two-character compounds 樂觀 ("optimistic"), 樂器 ("musical instrument"), and 配樂 ("incidental music"). In particular, 樂 is pronounced lè in 樂觀 but yuè in 樂器 and 配樂. To answer such heteronymous items, participants must first think of the target word and then associate its other pronunciations with the stimuli to arrive at the solution.
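The pairing logic described above can be sketched in code. This is only an illustrative sketch: the toy lexicon and the function names are hypothetical, whereas the actual test materials were drawn from the Ministry of Education frequency report. An answer solves an item when it forms a real two-character compound with every stimulus, in either associative direction.

```python
# Toy lexicon of valid two-character compound words (assumed for illustration;
# the real item pool was built from a dictionary-verified frequency list).
LEXICON = {"今年", "年輕", "去年", "樂觀", "樂器", "配樂"}

def pairs_with(stimulus: str, answer: str):
    """Return the direction of a valid pairing, or None.

    'forward'  : the stimulus precedes the answer (e.g., 今 + 年 -> 今年)
    'backward' : the answer precedes the stimulus (e.g., 年 + 輕 -> 年輕)
    """
    if stimulus + answer in LEXICON:
        return "forward"
    if answer + stimulus in LEXICON:
        return "backward"
    return None

def check_item(stimuli, answer: str) -> bool:
    """An answer solves an item only if it pairs with all three stimuli."""
    return all(pairs_with(s, answer) is not None for s in stimuli)

# The worked example from the text: stimuli 今, 輕, 去 with target word 年.
print(check_item(["今", "輕", "去"], "年"))  # → True
```

A forward/backward label per stimulus, as returned by `pairs_with`, is exactly the information needed to classify items into the BBB/BBF/BFF/FFF direction types described for the CRAT.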


Our data were collected from three experiments: a group experiment conducted via paper and pencil, an individual behavioral experiment conducted via computer, and an individual fMRI experiment. We collected the passing rate for each of the 120 items in every experiment, along with per-item response times in the individual behavioral and fMRI experiments. The procedures are described below.

First, in the individual behavioral experiment, a computer monitor displayed one item at a time, allotting 30 s per item. When participants came up with a solution, they pressed a button to move to an answer page, where they typed their answer before moving on to the next item. If participants failed to come up with an answer within the time limit, they moved directly to the next item. The experiment contained six blocks of 20 items each, for a total of 120 items. Participants received a 3-min rest after each block, and the entire session lasted 72 min.

Second, like the computerized behavioral experiment, the fMRI experiment consisted of six blocks of 20 items, for a total of 120 items. The primary difference was the duration of item display: each item was displayed for 20 s, to minimize discomfort and distraction during the fMRI scan. Additionally, unlike in the behavioral experiment, after pressing a button to access the answer page, participants reported their answer orally; a researcher marked the response on an answer sheet and instructed them to move on to the next item. If participants failed to give an answer within the time limit, the answer page was still displayed, and the researcher then closed it and advanced to the next item. In the fMRI experiment, we also collected participants' scores on the WAIS-III verbal subscale.

Finally, in the paper-and-pencil questionnaire group, all of the items were randomly divided into three sets of 40 items each. All participants completed the three sets at three different times, with set order counterbalanced. Before each session, the researcher explained the instructions and set a time limit of 20 min per 40 items.

In each of the three conditions, the researchers read the instructions, provided two practice items, and explained that various heteronymous items were included, to ensure that participants would consider heteronymous words as well.

Statistical analysis

This study aimed to establish normative data for the CCRAP for creativity research. We designed 120 items and collected passing rates and response times from the paper-and-pencil questionnaire and from both the 20-s and 30-s response time limit conditions. To understand the response patterns in each experiment, we compared the passing rates across the three experiments, as well as the response times for correct responses and their percentages of the time limit between the 20-s and 30-s conditions. Furthermore, to assess the influence of word attributes (e.g., heteronymous target words, syntactic function) on response patterns, we compared the passing rates and response times for heteronymous versus nonheteronymous items, and we examined the influence of the syntactic function (verb vs. noun) of the compound words on passing rates and response times. Finally, we assessed the influence of individual differences, such as gender and intelligence, on passing rates.

Results and discussion

We assessed the passing rates on each item through group testing and individual testing, along with response times and passing rates for the 20-s and 30-s time limit conditions. Before the statistical analysis, we checked the data and deleted the items with a 0% passing rate in both the group and individual testing, as well as items whose answers participants could find by referring to other items. After these deletions, 90 items remained in the statistical analysis, 30 of which used heteronyms.

First, we analyzed the passing rates of the three experiments and calculated the differences in, and correlations among, passing rates. An analysis of variance indicated no difference among the passing rates of the three experiments [F(2, 178) = 0.50, p = .61, η2 = .006], suggesting comparable item difficulty across experiments. A paired-samples t test indicated that response times differed significantly between the 20-s and 30-s time limit conditions [t(89) = 12.95, p < .001, d = 2.75]: response times for correct answers were longer in the 30-s condition (mean = 15.31, SD = 4.14) than in the 20-s condition (mean = 9.77, SD = 2.17). However, as a percentage of the time limit (20 s: 48.82%; 30 s: 51.08%), response times did not differ between the two conditions [t(89) = 1.44, p = .15, d = 0.30]. Thus, although absolute response times for correct answers differed between the conditions, in both cases solving an item took about half of the allotted time.
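The paired comparison above can be sketched with the standard formulas (t = mean of the difference scores divided by their standard error; Cohen's d for repeated measures = mean difference divided by the SD of the differences). The item-level response times below are invented purely for illustration; they are not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_d(x, y):
    """Paired-samples t statistic and Cohen's d for repeated measures."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    sd = stdev(diffs)                 # sample SD of the difference scores
    t = mean(diffs) / (sd / sqrt(n))  # t = mean(diff) / SE(diff)
    d = mean(diffs) / sd              # standardized mean difference
    return t, d

# Hypothetical per-item mean solution times (s) in the 30-s and 20-s conditions.
rt_30 = [15.2, 16.1, 14.8, 15.9, 14.5]
rt_20 = [9.8, 10.2, 9.5, 10.1, 9.3]
t, d = paired_t_and_d(rt_30, rt_20)
```

Because the same 90 items contribute a response time to each condition, the paired (rather than independent-samples) form of the test is the appropriate one here.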

Table 1 lists the correlations between the experiments' passing rates. The passing rates of the 20-s and 30-s time limit conditions were highly correlated (r = .86, p < .01). This finding resembles that of Bowden and Jung-Beeman (2003b), whose English compound remote associate problems showed a strong correlation between the passing rates of the 15-s and 30-s time limit conditions (r = .83). The passing rates of the paper-and-pencil group, however, were not significantly correlated with those of the individual computerized groups (rs = .15 ~ .17, ps > .10), signifying different response patterns in group versus individual testing. A possible reason is time management: in individual testing, the response limit was fixed and identical for every item, so participants had to respond within that limit on each item, whereas in group testing participants could allocate their time across items, spending more of it on easier items than on difficult ones.
Table 1

Correlation of passing rates in the three experiments (paper and pencil, 20-s time limit, and 30-s time limit conditions)

** p < .01

Subsequently, we analyzed the influence of item attributes on passing rates and response times. Mandarin has heteronymous characters: for example, combining 會 and 計 creates 會計 ("accounting"), in which 會 is pronounced kuài, whereas combining 會 and 員 creates 會員 ("member"), in which 會 is pronounced huì. For a heteronymous item, participants therefore need more time to create an association and to restructure the initial representation of the problem with another pronunciation before arriving at a solution (Weisberg, 1995); in short, heteronymous items should be more difficult. In the present study, 30 of the 90 items were heteronymous. To assess their difficulty, we selected 30 of the remaining 60 nonheteronymous items as a comparison set. The usage frequencies of the two sets did not differ [t(58) = 0.08, p = .97, d = 0.02]. We then compared the passing rates and response times of the two sets; Table 2 lists the results. The passing rate for heteronymous items was lower in the 20-s time limit condition [t(58) = 2.24, p = .03, d = 0.59] and, similarly, in the 30-s condition [t(58) = 2.13, p = .04, d = 0.56]; however, the passing rates of the two sets did not differ in the paper-and-pencil questionnaire group [t(58) = 0.76, p = .45, d = 0.20]. Response times for correct answers did not differ between the sets in either the 20-s condition [t(58) = –1.13, p = .27, d = –0.30] or the 30-s condition [t(58) = –0.94, p = .35, d = –0.25]. Thus, heteronymous items were more difficult under individual testing but not under group testing, and the times required to produce a correct answer did not differ.
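The heteronymous-versus-nonheteronymous comparison above uses an independent-samples design (two disjoint sets of 30 items), for which a pooled-variance t and Cohen's d can be sketched as below. The two samples of passing rates are invented for illustration only.

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t_and_d(x, y):
    """Pooled-variance independent-samples t statistic and Cohen's d."""
    nx, ny = len(x), len(y)
    # Pooled variance combines the two sample variances, weighted by df.
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    diff = mean(x) - mean(y)
    t = diff / sqrt(sp2 * (1 / nx + 1 / ny))
    d = diff / sqrt(sp2)
    return t, d

# Hypothetical 20-s passing rates for heteronymous and nonheteronymous items.
het = [0.25, 0.30, 0.22, 0.35, 0.28]
non = [0.38, 0.42, 0.33, 0.45, 0.40]
t, d = two_sample_t_and_d(het, non)
```

With 30 items per set, the degrees of freedom are 30 + 30 − 2 = 58, matching the t(58) values reported in the text.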
Table 2

t test results for passing rates and response times on heteronymous versus nonheteronymous items
Regarding the influence of the syntactic functions of the compound words, the 90 items produced 172 nouns and 98 verbs as compounds. We analyzed the influence of the number of nouns per item on passing rates and response times and found that the number of nouns positively predicted the passing rate only in the 20-s time limit condition [β = .22, t(88) = 2.08, p = .04]. That is, associations were easier to create when the compound word was a noun, especially under the 20-s time limit.
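The reported standardized beta comes from a simple regression of passing rate on noun count. With a single predictor, the standardized coefficient equals the Pearson correlation, and it can be recovered from the raw least-squares slope as shown below. The noun counts and passing rates here are invented for illustration only.

```python
from statistics import mean, stdev

def ols_slope_intercept(x, y):
    """Least-squares slope and intercept for y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return b, my - b * mx

def standardized_beta(x, y):
    """Standardized coefficient: the raw slope rescaled by sd(x)/sd(y)."""
    b, _ = ols_slope_intercept(x, y)
    return b * stdev(x) / stdev(y)

# Hypothetical noun counts per item and 20-s passing rates.
nouns = [1, 2, 2, 3, 1, 3]
pass_20 = [0.30, 0.42, 0.38, 0.55, 0.28, 0.50]
beta = standardized_beta(nouns, pass_20)
```
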

Finally, we examined the influences of gender and intelligence. Passing rates did not differ by gender in any of the experiments or time limit conditions (|ts| ≤ 1.45, ps > .15, |ds| ≤ 0.38; see Table 3). Gender was thus not a factor in CCRAP performance, consistent with results from the Italian version of the compound remote associate problems (Salvi et al., 2015).
Table 3

t test results for passing rates by gender in all conditions (paper and pencil, 20-s time limit, and 30-s time limit)
We also examined the correlation between CCRAP passing rates and scores on the WAIS-III verbal subscale, but found no significant correlation (r = .05, p = .65, n = 71). It had previously been assumed that verbal intelligence would influence CCRAP performance, since the test draws on Chinese vocabulary; our findings did not support this assumption, suggesting that CCRAP performance and verbal ability rely on independent mechanisms.

General conclusion

Creativity is strongly related to the development of civilization and to technological progress (Wei et al., 2014), bringing about significant changes and everyday conveniences; the evaluation of creativity has therefore long been a research topic of interest. The Chinese RAT has adequate reliability and validity, is economical to use, and is scored objectively (Jen et al., 2004); it is used to evaluate individuals' creative potential and to examine the process of creativity. For over a decade the Chinese RAT has been used among Mandarin speakers, but only in paper-and-pencil format. One significant difference between the paper-and-pencil and computerized questionnaires is that the former shows all of the items at once, whereas the latter displays them one at a time; the computerized questionnaire therefore allows behavioral or physiological responses to be recorded during problem solving, in addition to passing rates and response times. The English and Italian editions of the compound remote associate problems already have normative data (Bowden & Jung-Beeman, 2003b; Salvi et al., 2015). Mandarin is spoken natively by one in every five people and is among the most widely learned second languages in the world, yet normative data for the Chinese RAT do not exist. To improve the generalizability of creativity research in Mandarin, in the present study we developed 90 CCRAP items and assessed passing rates and response times for paper-and-pencil questionnaires and for computerized questionnaires with 20-s and 30-s time limits.
To contribute to future neurocognitive research, such as research using magnetoencephalography (MEG) and fMRI, we also assessed the brain mechanisms of creativity in Mandarin speakers and compared the findings with those from the English version of the compound remote associate problems (e.g., Jung-Beeman et al., 2004; Kounios et al., 2006; Razumnikova, 2007; Subramaniam, Kounios, Parrish, & Jung-Beeman, 2009) as a type of cross-cultural comparison, attempting to identify ethnic and cultural differences and commonalities in the production of creative concepts.

The Appendix presents the normative data for the CCRAP. For each item it also lists content-related attributes: the syntactic function of the answer (verb/noun), whether the answer is a heteronym, the passing rate for the paper-and-pencil questionnaire, and the means and standard deviations of response times for the computerized questionnaires with 20-s and 30-s time limits. Future creativity research can select item sets according to the testing method. For example, studies using paper-and-pencil questionnaires can draw on the paper-and-pencil data, such as the passing rates, to select items by difficulty. Neurocognitive experiments that require larger item pools, such as MEG or fMRI studies, can refer to the passing rates from the 20-s time-limit condition to select items and arrange them appropriately into blocks. For behavioral experiments, the data from both the 20-s and 30-s time-limit conditions can serve as references for item selection.
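As an illustration of how such norms might be used for item selection, consider the following sketch. The field names, values, and difficulty band are invented for illustration and are not taken from the published Appendix.

```python
# Hypothetical sketch: selecting CCRAP items of medium difficulty from
# a normative table and ordering them by mean response time so that
# experimental blocks can be balanced. All numbers below are invented;
# consult the Appendix for the actual norms.

norms = [
    {"item": 1, "pass_rate_20s": 0.82, "mean_rt_20s": 6.1},
    {"item": 2, "pass_rate_20s": 0.35, "mean_rt_20s": 14.3},
    {"item": 3, "pass_rate_20s": 0.55, "mean_rt_20s": 10.8},
    {"item": 4, "pass_rate_20s": 0.12, "mean_rt_20s": 17.9},
]

def select_items(norms, lo=0.3, hi=0.7):
    """Keep items whose passing rate falls in [lo, hi] (medium
    difficulty), sorted by mean response time."""
    kept = [n for n in norms if lo <= n["pass_rate_20s"] <= hi]
    return sorted(kept, key=lambda n: n["mean_rt_20s"])

selected = [n["item"] for n in select_items(norms)]
print(selected)  # → [3, 2]
```

A study with a 30-s condition would simply filter on the corresponding 30-s columns instead; the same pattern applies to the paper-and-pencil passing rates.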


Author note

This research was partially supported by the “Aim for the Top University Project,” and by the Center of Learning Technology for Chinese at National Taiwan Normal University (NTNU), sponsored by the Ministry of Education, Taiwan, R.O.C., and the International Research-Intensive Center of Excellence Program of NTNU and the Ministry of Science and Technology, Taiwan, R.O.C. under Grant No. MOST 104-2911-I-003-301. We also thank the Ministry of Science and Technology, Taiwan, R.O.C., for funding this study, through projects under “The Mechanism of Remote Association and Representation Change in the Process of Creativity” (MOST103-2420-H-003-023-DR).


  1. Ansburg, P., & Hill, K. (2003). Creative and analytic thinkers differ in their use of attentional resources. Personality and Individual Differences, 34, 1141–1152.
  2. Baba, Y. (1982). An analysis of creativity by means of the remote associates test for adults revised in Japanese (Jarat Form-A). Japanese Journal of Psychology, 52, 330–336.
  3. Baer, J., & Kaufman, J. C. (2008). Gender differences in creativity. Journal of Creative Behavior, 19, 143–146.
  4. Ben-Zur, H. (1989). Automatic and directed search processes in solving simple semantic memory problems. Memory & Cognition, 17, 617–626.
  5. Bowden, E. M., & Jung-Beeman, M. (2003a). Aha! Insight experience correlates with solution activation in the right hemisphere. Psychonomic Bulletin & Review, 10, 730–737. doi: 10.3758/BF03196539
  6. Bowden, E. M., & Jung-Beeman, M. (2003b). Normative data for 144 compound remote associate problems. Behavior Research Methods, Instruments, & Computers, 35, 634–639. doi: 10.3758/BF03195543
  7. Brown, J. D., Dutton, K. A., & Cook, K. E. (2001). From the top down: Self-esteem and self-evaluation. Cognition and Emotion, 15, 615–631.
  8. Cai, D. J., Mednick, S. A., Harrison, E. M., Kanady, J. C., & Mednick, S. C. (2009). REM, not incubation, improves creativity by priming associative networks. Proceedings of the National Academy of Sciences, 106, 10130–10134. doi: 10.1073/pnas.0900271106
  9. Chang, Y. L., Wu, J. Y., Chen, H. C., & Wu, C. L. (2016). The development of the Chinese Radical Remote Associates Test. Psychological Testing, 63, 59–81.
  10. Chen, H. C., Peng, S. L., Tseng, C. C., & Chiou, H. W. (2008). An exploratory study of the relation between the average saccade amplitude and creativity under the eyetracker mechanism. Bulletin of Educational Psychology, 39, 127–149.
  11. Chen, H. C., Peng, S. L., & Wu, Q. L. (2011). The creative problem solving processes of pure and pseudo insight problems in the Chinese Remote Association Test. Journal of Chinese Creativity, 2, 25–51.
  12. Chen, H. C., & Wu, Q. L. (2014). The component analysis of the Chinese Remote Associates Test in a linear logistic latent trait model. Journal of Chinese Creativity, 5(1), 51–63.
  13. Chiu, F. C., & Yau, F. Y. (2010). The effects of regulatory focus and temporal distance to the goal on creativity. Bulletin of Educational Psychology, 41, 497–520.
  14. Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82, 407–428.
  15. Datta, L. E. (1964). Remote associates test as a predictor of creativity in engineers. Journal of Applied Psychology, 48, 183.
  16. Fleck, J. I., & Weisberg, R. W. (2004). The use of verbal protocols as data: An analysis of insight in the candle problem. Memory & Cognition, 32, 990–1006.
  17. Gianotti, R. R., Mohr, C., Pizzagalli, D., Lehmann, D., & Brugger, P. (2001). Associative processing and paranormal belief. Psychiatry and Clinical Neurosciences, 55, 595–603.
  18. Gruszka, A., & Necka, E. (2002). Priming and acceptance of close and remote associations by creative and less creative people. Creativity Research Journal, 14(2), 193–205.
  19. Hamilton, M. A. (1982). "Jamaicanizing" the Mednick Remote Associates Test of creativity. Perceptual and Motor Skills, 55, 321–322.
  20. Heatherton, T. F., & Vohs, K. D. (2000). Interpersonal evaluations following threats to self: Role of self-esteem. Journal of Personality and Social Psychology, 78, 725–736.
  21. Hsieh, S. L. (2013). From mental chronometry to chronopsychophysiology: The marriage of mental chronometry and event-related potential. Chinese Journal of Psychology, 55, 255–276.
  22. Huang, P. S., Chen, H. C., & Liu, C. H. (2012). The development of the Chinese word remote associates test for college students. Psychological Testing, 59, 581–607.
  23. Jen, C. H., Chen, H. C., Lien, H. C., & Cho, S. L. (2004). The development of the Chinese remote association test. Research in Applied Psychology, 21, 195–217.
  24. Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., . . . Kounios, J. (2004). Neural activity when people solve verbal problems with insight. PLoS Biology, 2, e97.
  25. Kaufmann, D. (2003). What to measure? A new look at the concept of creativity. Scandinavian Journal of Educational Research, 47, 235–251.
  26. Knoblich, G., Ohlsson, S., Haider, H., & Rhenius, D. (1999). Constraint relaxation and chunk decomposition in insight problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1534–1555. doi: 10.1037/0278-7393.25.6.1534
  27. Kounios, J., Frymiare, J. L., Bowden, E. M., Fleck, J. I., Subramaniam, K., Parrish, T. B., & Jung-Beeman, M. (2006). The prepared mind: Neural activity prior to problem presentation predicts subsequent solution by sudden insight. Psychological Science, 17, 882–890.
  28. Mednick, S. A. (1962). The associative basis of the creative process. Psychological Review, 69, 220–232.
  29. Mednick, S. A. (1968). The remote associates test. Journal of Creative Behavior, 2, 213–214.
  30. Mednick, M. T. (1963). Research creativity in psychology graduate students. Journal of Consulting Psychology, 27, 265–266.
  31. Mednick, S. A., & Mednick, M. T. (1967). Examiner's manual, Remote Associates Test. Boston, MA: Houghton Mifflin.
  32. Nevo, B., & Levin, I. (1978). Remote Associates Test: Assessment of creativity in Hebrew. Megamot, 24, 87–98.
  33. Posner, M. I. (1978). Chronometric explorations of mind. Hillsdale, NJ: Erlbaum.
  34. Razumnikova, O. M. (2007). Creativity related cortex activity in the remote associates task. Brain Research Bulletin, 73, 96–102.
  35. Salvi, C., Bricolo, E., Franconeri, S., Kounios, J., & Beeman, M. (2015). Sudden insight is associated with shutting down visual inputs. Psychonomic Bulletin & Review, 22, 1814–1819. doi: 10.3758/s13423-015-0845-0
  36. Salvi, C., Costantini, G., Bricolo, E., Perugini, M., & Beeman, M. (2016). Validation of Italian rebus puzzles and compound remote associate problems. Behavior Research Methods, 48, 664–685. doi: 10.3758/s13428-015-0597-9
  37. Smith, K. A., Huber, D. E., & Vul, E. (2013). Multiply-constrained semantic search in the Remote Associates Test. Cognition, 128, 64–75. doi: 10.1016/j.cognition.2013.03.001
  38. Subramaniam, K., Kounios, J., Parrish, T. B., & Jung-Beeman, M. (2009). A brain mechanism for facilitation of insight by positive affect. Journal of Cognitive Neuroscience, 21, 415–432.
  39. Vohs, K. D., & Heatherton, T. F. (2001). Self-esteem and threats to self: Implications for self-construals and interpersonal perceptions. Journal of Personality and Social Psychology, 81, 1103–1118.
  40. Wakefield, J. F. (1992). Creative thinking: Problem solving skills and the art orientation. Norwood, NJ: Ablex.
  41. Wei, D., Yang, J., Li, W., Wang, K., Zhang, Q., & Qiu, J. (2014). Increased resting functional connectivity of the medial prefrontal cortex in creativity by means of cognitive stimulation. Cortex, 51, 92–102.
  42. Weinstein, S., & Graves, R. E. (2001). Creativity, schizotypy, and laterality. Cognitive Neuropsychiatry, 6, 131–146.
  43. Weinstein, S., & Graves, R. E. (2002). Are creativity and schizotypy products of a right hemisphere bias? Brain and Cognition, 49, 138–151.
  44. Weisberg, R. W. (1995). Prolegomena to theories of insight in problem solving: A taxonomy of problems. In R. J. Sternberg & J. E. Davidson (Eds.), The nature of insight (pp. 157–196). Cambridge, MA: MIT Press.

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  1. Department of Educational Psychology and Counseling, National Taiwan Normal University, Taipei City, Republic of China
