Recent years have seen a surge of interest in the topic of so-called epistemic emotions: a group of emotions related to the knowledge-generating qualities of cognitive tasks and activities (Brun, Doğuoğlu, & Kuenzle, 2008; Muis, Chevrier, & Singh, 2018). These emotions typically include surprise, curiosity, and interest. Recent studies have revealed that epistemic emotions have profound implications for cognitive processing and learning. Surprise is caused by the discrepancy between expected and actual outcomes, and this discrepancy (often described as ‘prediction error’) is the basis of learning and decision-making (Dole & Sinatra, 1998; Rescorla & Wagner, 1972). Stahl and Feigenson (2015) showed that even 11-month-old infants learn better from events when their expectations are violated (i.e., when they are surprised). A number of recent studies have also found that the strength of the feeling of curiosity or interest triggered by trivia questions predicts memory accuracy for the answers to those questions (Fastrich, Kerr, Castel, & Murayama, 2017; Kang et al., 2009; Marvin & Shohamy, 2016; McGillivray, Murayama, & Castel, 2015; Wade & Kidd, 2019), as well as for irrelevant materials that were incidentally presented (Galli et al., 2018; Gruber, Gelman, & Ranganath, 2014).

One of the challenges of research on epistemic emotions is that they are not easy to induce in experimental settings. In controlled experiments, including those using neuroscientific facilities such as functional magnetic resonance imaging (fMRI), researchers need a number of short, repeated trials to ensure the reliability of the task. However, triggering epistemic emotions in such a short time frame is challenging because epistemic emotions, by definition, require people to cognitively process the task; they are not something that can be triggered immediately upon presenting a stimulus. In addition, even if it is possible to induce epistemic emotions in a short time frame, the magnitude of the emotion is likely to be insufficient to cause a psychological response and/or behavioral change. Thus, there are few experimental materials that can claim to induce epistemic emotions. In fact, the vast majority of studies in experimental psychology use trivia questions or similar knowledge questions to induce epistemic emotions, particularly curiosity (Baranes, Oudeyer, & Gottlieb, 2015; Kang et al., 2009; Litman, Hutchins, & Russon, 2005; Metcalfe, Schwartz, & Bloom, 2017; Murayama & Kuhbandner, 2011).

In the current study, we validated a stimulus set called Magic Curiosity Arousing Tricks (MagicCATs): a collection of 166 novel short magic trick video clips that trigger epistemic emotions in experimental settings. The MagicCATs videos are available to researchers (see the "Stimulus availability" section in Methods), and the current article reports the basic characteristics and norms of these magic trick video clips. Several studies have already used magic tricks as stimuli to trigger epistemic emotions, including neuroimaging studies (e.g., Parris, Kuhn, Mizon, Benattayallah, & Hodgson, 2009; Danek, Öllinger, Fraps, Grothe, & Flanagin, 2015), but to the best of our knowledge, there are no standardized stimuli that are freely available to researchers. In contrast to other materials, one unique aspect of magic tricks is that they induce a strong sense of violated expectation and surprise (Danek et al., 2015). Surprise and curiosity/interest are obviously interrelated epistemic emotions (Pekrun, Vogl, Muis, & Sinatra, 2017), but the materials available thus far (e.g., trivia questions) are not designed to induce surprise in order to evoke curiosity (for an exception, see Vogl, Pekrun, Murayama, & Loderer, 2019; Vogl, Pekrun, Murayama, Loderer, & Schubert, 2019). Perhaps as a consequence, curiosity research tends to focus on uncertainty as a major triggering factor (e.g., van Lieshout, Vandenbroucke, Müller, Cools, & de Lange, 2018), and the role of surprise in relation to curiosity and interest has been relatively under-examined.

Another important feature of magic tricks is their relatively strong, intuitive, and universal appeal. Because magic tricks are intended to create a strong violation of expectation, spectators are naturally motivated to understand why the expectation was violated ("why did this happen?"), which is likely to induce relatively strong epistemic emotions. A further advantage of magic tricks is that they consist mainly of nonverbal information, such as vanishing or appearing objects. This nonverbal nature makes it easy for people to intuitively understand the content; as a result, magic tricks can trigger epistemic emotions regardless of participants' linguistic, educational, and cultural backgrounds. Of course, we are not claiming that magic tricks are superior to existing stimuli for triggering epistemic emotions. There are some obvious limitations, such as the difficulty of controlling stimulus length. However, given their unique advantages, we believe that the current stimulus set provides complementary benefits to researchers studying epistemic emotions.

There are different ways to utilize MagicCATs in research, but one common context is an experiment in which researchers intend to elicit different levels of epistemic emotions on a trial-by-trial basis to examine the within-person correlates of epistemic emotions. We have already conducted one neuroimaging experiment using MagicCATs and validated the effectiveness of the stimuli. In Lau, Ozono, Kuratomi, Komiya, and Murayama (2020), we presented participants with a series of 36 magic trick video clips from MagicCATs to induce feelings of curiosity and asked them to decide whether they would be willing to risk receiving electric shocks to satisfy their curiosity about the solution of the trick. Self-reported ratings of curiosity for each magic trick were significantly associated with the decision to accept the risk of electric shocks on a trial-by-trial basis, indicating that the magic trick videos successfully induced curiosity.

The collection of magic trick video clips we provide would also benefit the growing research area called the “science of magic”, which investigates human cognitive mechanisms using magic tricks (for reviews, see Kuhn, Amlani, & Rensink, 2008; Kuhn, 2019; Thomas, Didierjean, Maquestiaux, & Gygax, 2015). Broadly speaking, magic tricks can be classified by the three general methods used by magicians: misdirection, illusion, and forcing (Kuhn et al., 2008). By embedding these types of magic tricks in psychological experiments, we can gain unique insight into cognitive processes. For example, the technique of misdirection, i.e., directing the spectator's attention away from the cause of the magic effect (Kuhn et al., 2008), is useful for investigating mechanisms of attention (e.g., Barnhart & Goldinger, 2014; Kuhn & Findlay, 2010; Kuhn & Land, 2006; Wiseman & Nakano, 2016). Also, many magic tricks are based on visual or cognitive illusions (e.g., Macknik, Martinez-Conde, & Blakeslee, 2010). Investigating how magicians use such illusions in practice may lead to new insights into perception and cognition (Ekroll, Sayim, & Wagemans, 2013). Furthermore, some magic tricks force spectators to choose a certain object while the spectators believe that they made the choice of their own free will. Investigating how and why spectators hold such false beliefs can lead to a better understanding of human free will and agency (Kuhn, Pailhès, & Lan, 2020; Olson, Amlani, & Rensink, 2013; Ozono, 2017; Pailhès & Kuhn, 2020). Our magic trick videos include misdirection, illusion, and forcing, as well as various other trick mechanisms (e.g., tricks utilizing mathematical logic or physical principles), allowing researchers to pursue a variety of psychological research questions.

The current paper describes how we created MagicCATs, which aims to induce epistemic emotions in psychological experiments. We then provide rating data and perform quantitative analysis to examine the psychometric properties of MagicCATs. Specifically, we show that the magic tricks elicit a variety of epistemic emotions (surprise in response to the trick, interest in the trick, and curiosity in the solution) with sufficient inter-stimulus variability.


Creating MagicCATs

Four male magicians, including a champion of an international magic competition, performed 145 magic tricks in total. These performances were filmed in two studios (one studio for one magician and another studio for the other three magicians) by professional photographers using high-resolution video cameras. The magicians selected magic tricks that would maximize the variety of materials (e.g., playing cards, coins, sponges, etc.) and the types of the tricks (e.g., vanishing, transportation, prediction, etc.). Magicians also ensured that the magic tricks were heterogeneous enough to induce different levels of epistemic emotions (i.e., surprise, interest, and curiosity). As most previous research on curiosity and interest compared responses to stimuli that induce either a low or high level of curiosity and interest (Fastrich et al., 2017; Gruber et al., 2014; Kang et al., 2009), it is important to have sufficient inter-stimulus variability (i.e., it is critical to also include magic tricks that are relatively less surprising/interesting). See Appendix Table 6 for details of the MagicCATs.

All videos were then edited using Adobe® Premiere Pro CC® (2015) software to have a similar uniform (dark) background, size (720 x 404 pixels), and viewing focus. The videos were muted, and English subtitles were added to a few videos when necessary. The face of the magician was obscured as much as possible to avoid potential distraction due to their appearance and facial expressions. This editing also helps minimize potential responses to the gender of the magicians, as reported by Gygax, Thomas, Didierjean, and Kuhn (2019). Twenty-one of the 145 magic tricks were relatively long and included a sequence of more than one trick. For these videos, we created a short version focusing on the first trick presented, in addition to the original long version, thus giving researchers more flexibility in stimulus selection (e.g., choosing stimuli that fit within a specific time constraint). In total, we created 166 video clips: the 21 long video clips of tricks accompanied by 21 short versions, plus 124 other videos. These videos ranged between 8 and 155 s in length (mean = 37.3; median = 31; SD = 23.5). Excluding the long versions, the videos ranged between 8 and 105 s (mean = 33.1; median = 30; SD = 19.8). Three sample videos are available online.

Rating task


A total of 495 participants took part in the rating task through Amazon Mechanical Turk. Of these US participants, 470 were paid $3.50 for approximately 35 min of study participation, while the first 25 participants were paid $2.50 before we recalculated a more realistic study completion time. Prior to the main data analysis, we excluded 44 participants who either (a) took longer than 2 SD above the average time to complete the experiment (no participant took less than 2 SD below the average duration); (b) gave identical ratings on more than three questions for all trials; (c) answered "no" to the "clarity of the trick" question for all the presented video clips (see below); (d) indicated problems with video presentation or internet connection in the post-experiment questions; or (e) indicated that they were distracted during the experiment or had already taken part in a similar experiment previously. This exclusion led to a final sample of 451 participants: 259 males and 192 females (mean age = 36.10, SD = 10.34, range = 20–71).

Stimulus lists

It was impractical for any single participant to view and rate all 166 video clips in MagicCATs, particularly as some magic tricks were duplicated across the short and long versions described above. To address this, we split the 166 video clips into nine lists and presented each participant with video clips from only two of the lists. Our design is an adapted version of the balanced incomplete block (BIB) spiraling procedure from the test-theory literature (Fleiss, 1981; Hanani, 1961).

The nine lists (Lists 1–9) were created in the following manner. Lists 1 and 2 consisted of the 21 tricks with short and long versions mentioned earlier. Short and long versions were assigned evenly to these lists (i.e., List 1 included ten short- and 11 long-version video clips; List 2 included 11 short- and ten long-version video clips). Video clips from the same magic trick were never assigned to the same list; thus, when List 1 included the short version of a magic trick, List 2 included the long version of the same trick. The other 124 video clips were assigned to the remaining seven lists (Lists 3 to 9), resulting in 17 or 18 video clips per list. When creating these lists, we attempted to minimize the difference in the total duration of the video clips between lists. Consequently, the total duration of the video clips was 16–17 min in Lists 1 and 2 and 9–10 min in Lists 3 to 9.


Participants were presented with video clips from two of the nine lists in a randomized order. The assignment of lists was randomly determined, with the constraint that participants were never presented with both Lists 1 and 2, because these included video clips of the same magic tricks at different lengths. For each video clip, participants gave five different ratings: (a) whether or not they understood the intention of the magic trick (clarity of the trick); (b) how surprised they were by the magic trick (surprise in response to the trick); (c) how interesting the magic trick was (interest in the trick); (d) how confident they were that they had figured out the solution to the trick (confidence in the solution); and (e) how curious they were about how the magic trick was done (curiosity in the solution). Clarity of the trick, which assessed whether participants understood what happened after seeing the video clip, was rated on a binary response scale (Yes/No); we included this rating for use as an exclusion criterion. The other four questions were rated on 10-point Likert scales ranging from 1 (not at all) to 10 (very much).
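As an illustration, the list-assignment constraint described above can be sketched in a few lines of Python. This is an illustrative sketch under our reading of the procedure, not the program actually used in the experiment; the function name `assign_lists` is ours.

```python
import random

def assign_lists(rng: random.Random) -> tuple:
    """Pick two of the nine stimulus lists for one participant, with the
    constraint that Lists 1 and 2 (which contain short and long versions
    of the same magic tricks) are never presented together."""
    while True:
        pair = tuple(sorted(rng.sample(range(1, 10), 2)))
        if pair != (1, 2):
            return pair

rng = random.Random(2020)
pairs = [assign_lists(rng) for _ in range(1000)]
assert (1, 2) not in pairs                     # forbidden pairing never occurs
assert all(1 <= a < b <= 9 for a, b in pairs)  # always two distinct lists 1-9
```

Rejection sampling is the simplest way to encode the constraint; with only one forbidden pair out of 36, the loop almost always terminates on the first draw.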

At the end of the session, participants gave some post-experiment ratings. First, participants reported how much they were interested in the magic tricks in general (1 being not at all and 10 being very much) and whether they perform magic tricks themselves (1 being not at all, 2 being a little bit, and 3 being frequently). Second, they reported any video presentation or internet connection problems that had occurred during the experiment, whether they were doing anything else during the experiment, and whether they had already participated in another experiment with the same videos. We emphasized that their compensation would not be affected by their responses. As indicated above, these questions were used to exclude participants from the main data analysis who appeared to have been disengaged during the study.


Analysis of the post-experiment questions showed that participants were generally interested in the magic tricks (M = 7.13, SD = 2.05). Only six participants indicated that they frequently performed magic tricks; 379 participants indicated that they had never performed a magic trick. In the following analyses, we did not use the data for Trick 9 because it mistakenly contained sound (we nevertheless include this magic trick in the final stimulus set in Appendix Table 6).

Table 1 reports the descriptive statistics for the main variables (clarity of the trick, surprise in response to the trick, interest in the trick, confidence in the solution, and curiosity in the solution), with video clips as the unit of analysis. On average, participants understood the intention of the magic tricks the majority of the time (86.03%). Looking at the proportion of participants who understood the magic trick for each video clip (Appendix Table 7), the majority of the video clips (156 out of 165) were understood by more than 70% of participants; the remaining nine video clips were understood by less than 70% of participants. In the following analyses, all trials in which the intention of the magic trick was unclear to the participant were removed.

Table 1 Descriptive statistics for the 166 magic trick videos

Participants reported a moderate level of surprise in response to the trick, interest in the trick, and curiosity in the solution (M = 5.58, 5.70, and 5.71; SD = 0.81, 0.77, and 0.72, respectively, on a 1–10 scale). These average rating values are consistent with the fact that we intentionally included both surprising and less surprising magic tricks to ensure the heterogeneity of the stimulus set. The average confidence in the solution was relatively low (M = 4.14). Note that this is a subjective rating of confidence, and we do not have objective data demonstrating whether participants indeed correctly guessed the solutions behind some of the magic tricks.

To further examine whether our new stimuli can appropriately capture within-person variability in epistemic emotions, we applied a mixed-effects model to the data to decompose three distinct variance components: participant variance, video variance, and participant x video variance. Participant variance represents overall individual differences between participants (i.e., some participants experienced relatively high surprise compared to others across all video clips), whereas video variance represents differences in ratings between video clips (e.g., some video clips were more surprising than others to all participants). Participant x video variance represents individual differences in participants' responses to different video clips (e.g., some participants were interested in a specific video clip whereas others were not). Note that participant x video variance also includes variance from measurement error, which we cannot statistically separate. In most of the previous literature on epistemic emotions (e.g., Fastrich et al., 2017; Fayn et al., 2019; Vogl et al., 2019), within-person variability reflected both the stimulus variance and the participant x stimulus variance (i.e., the video variance and participant x video variance in this study).

The mixed-model analysis found that, for all ratings, the majority of the variance was explained by the participant x video component (Table 2). The random effect of participants explained about 36–43% of the response variance. The random effect of videos was the smallest, explaining about 5–7% of the variance. These results indicate that these materials have sufficient within-person variability (57–64%) to examine intra-individual fluctuations in these epistemic emotions. The findings are also largely consistent with those of Fastrich et al. (2017), who used trivia questions.
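To make the logic of this decomposition concrete, the following numpy simulation generates one rating per participant x video cell from invented variance components and recovers them with a random-effects two-way ANOVA. This is an illustrative sketch, not our analysis code (which used mixed-effects modeling), and the true variances below are invented; note that, as in our data, the participant x video component here absorbs measurement error because there is only one rating per cell.

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_v = 200, 30                     # participants x video clips
sp, sv, se = 1.0, 0.3, 1.5             # invented SDs: participant, video, residual

# simulate one rating per participant x video cell
Y = (rng.normal(0, sp, (n_p, 1))       # participant effect
     + rng.normal(0, sv, (1, n_v))     # video effect
     + rng.normal(0, se, (n_p, n_v)))  # participant x video effect (+ error)

# random-effects two-way ANOVA with one observation per cell
ms_p = n_v * Y.mean(axis=1).var(ddof=1)
ms_v = n_p * Y.mean(axis=0).var(ddof=1)
resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()
ms_e = (resid ** 2).sum() / ((n_p - 1) * (n_v - 1))

var_e = ms_e                           # participant x video (+ error)
var_p = (ms_p - ms_e) / n_v            # participant
var_v = (ms_v - ms_e) / n_p            # video
total = var_p + var_v + var_e
props = {name: v / total for name, v in
         (("participant", var_p), ("video", var_v),
          ("participant x video (+ error)", var_e))}
```

With these invented inputs, the recovered proportions mirror the qualitative pattern in Table 2: the participant x video component dominates, the video component is smallest.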

Table 2 Variance components of the ratings

One limitation of the variance decomposition analysis in Table 2 is that we cannot dissociate the participant x video variance from measurement error variance; it is thus possible that the within-person variance we reported is an overestimate. Although it is difficult to address this issue perfectly, to gain more insight into the rating data we conducted further mixed-effects modeling to decompose the variance with ratings of epistemic emotions as an additional factor. More specifically, we regarded the data as three-mode data of participants x videos x types of epistemic emotion (emotion type being surprise in response to the trick, interest in the trick, or curiosity in the solution), and conducted mixed-effects modeling to estimate the variance components of each factor and their interactions. We excluded confidence from this analysis because confidence was not assessed as an epistemic emotion. As the emotion type variance was very small and caused a convergence error, we eliminated this term from the final model. Table 3 presents the results. Note that the participant x video x emotion type variance is confounded with measurement error variance. One important finding from this analysis is that the large participant x video contribution observed in the original variance decomposition model (Table 2) is now largely absorbed into the participant x video variance (not the participant x video x emotion type variance), which is no longer confounded with error terms. This means that measurement error was not a major source of the participant x video variance observed in the original model. Another important observation is that emotion type explained a relatively small portion (20.1%) of the total variance (video x emotion type + participant x emotion type + participant x video x emotion type, the last of which includes measurement error). These results suggest that participants may not have made a very strong distinction between the three types of epistemic emotions.

Table 3 The variance components of each factor and their interactions

We also examined the extent to which these ratings can be explained by the individual difference variables we assessed (i.e., age, gender, general interest in magic tricks, and experience performing magic tricks) by including these variables as additional fixed-effect predictors in the model. We also included a fixed effect of trial number and its random slopes as an additional within-person predictor to explore the potential role of familiarization in epistemic emotions. Table 4 reports the results. Across epistemic emotions there was a consistent age effect, suggesting that older participants tended to report higher epistemic emotions overall (βs = 0.02, ps < .01). On the other hand, confidence was negatively associated with age (β = – 0.02, p < .01). There were no statistically significant gender differences (– 0.20 ≤ βs ≤ 0.03, ps > .05). General interest in magic tricks was significantly and positively associated with epistemic emotions and confidence (0.16 ≤ βs ≤ 0.35, ps < .001). Experience with magic tricks had a strong positive association with confidence (β = 1.17, p < .001) and a weak positive relationship with interest (β = 0.38, p < .05). Finally, trial number was negatively associated with epistemic emotions and confidence, indicating a general declining trend in these ratings over trials (– 0.14 ≤ βs ≤ – 0.08, ps < .001).

Table 4 Fixed effects (standard errors) predicting ratings of epistemic emotions in mixed-effects modeling

To further examine the differentiation of these epistemic emotions, we also computed the correlations between surprise in response to the trick, interest in the trick, confidence in the solution, and curiosity in the solution at the within-person level. More specifically, we calculated within-person correlations for each participant (using video clips as the unit of analysis) and then computed the mean and SD of the correlations across participants (see Table 5). Surprise in response to the trick, interest in the trick, and curiosity in the solution were highly correlated; however, there were also considerable individual differences (i.e., the SDs are relatively high), indicating that these three epistemic emotions are overlapping but distinct concepts, at least for some individuals (Fayn et al., 2019). The distributions of the within-person correlations were all unimodal, but they were also substantially skewed, given the bounds of correlation coefficients (– 1 ≤ r ≤ 1) and the large individual differences. For completeness, we also computed between-person correlations and report them in Appendix Table 8. As is typical with correlations of aggregated scores (Robinson, 1950), the between-person correlations between epistemic emotions are very high.
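For readers who wish to reproduce this kind of analysis with the released rating data, the computation can be sketched as follows. This is an illustrative sketch with toy data (rows are participants, columns are video clips); the function name is ours, and we average raw correlations for simplicity, although a Fisher z transformation before averaging is a common alternative.

```python
import numpy as np

def within_person_correlations(x: np.ndarray, y: np.ndarray):
    """Each participant's own correlation between two ratings across
    video clips; returns the mean and SD of these correlations."""
    rs = np.array([np.corrcoef(xi, yi)[0, 1] for xi, yi in zip(x, y)])
    return rs.mean(), rs.std(ddof=1)

# toy data: two participants x four clips
surprise = np.array([[1., 2., 3., 4.],
                     [4., 3., 2., 1.]])
interest = np.array([[2., 4., 6., 8.],   # participant 1: r = +1
                     [1., 2., 3., 4.]])  # participant 2: r = -1
mean_r, sd_r = within_person_correlations(surprise, interest)
assert abs(mean_r) < 1e-9                # +1 and -1 average to zero
```

The toy example also illustrates why the SD matters: a mean correlation near zero can coexist with perfectly consistent, but opposite, within-person patterns.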

Table 5 Means of within-person correlations among ratings

Stimulus availability

The final set of MagicCATs video clips is available upon request for research purposes. The request procedure is posted on the Open Science Framework, along with the stimulus list (equivalent to Tables 6 and 7) and the raw rating data.


The current article introduced 166 magic trick video clips (MagicCATs) as a novel stimulus set for inducing epistemic emotions. MagicCATs include a variety of magic tricks of various lengths (8–155 s) and with diverse materials (e.g., playing cards, coins, sponges, etc.), making it easy for researchers to select and use the subset of stimuli best suited to their own research purposes. Furthermore, the rating results showed sufficient within-person variance with moderate mean levels of epistemic emotions, meaning that these video clips are suitable for examining these emotions on a trial-by-trial basis. The MagicCATs video clips are available for research purposes, and the stimulus list and raw rating data are also available online (see the "Stimulus availability" section).

The mixed-effects modeling analysis indicated that there is substantial within-person variation in the ratings of epistemic emotions across different video clips. This is a useful property for an experimental stimulus set, and these results demonstrate its capability to capture within-person variation in these emotions. However, the majority of this within-person variance came from the participant x video effect, meaning that there were substantial individual differences in which magic tricks participants found surprising, interesting, and curiosity-inducing (see Fastrich et al., 2017, for a similar finding with trivia questions). Therefore, when examining epistemic emotions with these stimuli, it may be ideal to assess the emotions on a participant-by-participant basis. Of course, video variance was still present, and it is therefore possible for future studies to rely on the aggregated ratings reported in Appendix Table 7 to compare, for example, responses between high- vs. low-curiosity magic video clips; however, such results should be interpreted with caution given the potentially large individual differences. It is also worth noting that we observed large participant variance, comparable to the participant x video variance (Table 3). This variance may reflect general response bias, but it also suggests possible individual differences in participants' overall tendency to experience epistemic emotions.

One important research question for the future is to identify the source of these large individual differences. As a first step, we explored the basic demographic variables we assessed in this experiment. First, older participants tended to report higher epistemic emotions overall. This finding is consistent with previous studies using trivia questions (Fastrich et al., 2017; McGillivray et al., 2015) but somewhat contradicts the declining trend in curiosity-related personality traits in older age groups (Sakaki, Yagi, & Murayama, 2018). On the other hand, confidence was negatively associated with age, which is also consistent with previous work (Jay, 2016). Second, there were no statistically significant gender differences. Gygax et al. (2019) reported that males are more motivated than females to discover how tricks are done, which is inconsistent with our results for curiosity and confidence in the solution. There are some methodological differences (e.g., in their study, participants were asked to report the solution they had guessed), and future research is needed to clarify which gender differences exist in which contexts. It is possible that other basic demographic variables, such as culture, can explain individual differences. Unfortunately, we only collected data from US participants and could not analyze potential cultural differences. Because we provide all the experimental materials and programs online (code for the experiment is also available on request), we hope that those interested in the topic will conduct follow-up studies with different populations to examine the role of culture in the subjective experience of epistemic emotions. Note that, using generalizability theory, our variance decomposition estimates (Table 2) allow researchers to determine the number of video clips needed to reliably assess such individual differences (see Brennan, 2001, for formulas).
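To illustrate how variance estimates of this kind can be used, the following sketch computes the generalizability coefficient for relative decisions and the number of clips needed to reach a target reliability. The variance proportions below are illustrative values in the range reported earlier, not the exact Table 2 estimates, and the function names are ours.

```python
def gcoef(var_p: float, var_pv: float, k: int) -> float:
    """Generalizability coefficient for a participant's mean rating over
    k randomly sampled video clips (relative decisions; Brennan, 2001):
    var_p / (var_p + var_pv / k)."""
    return var_p / (var_p + var_pv / k)

def clips_needed(var_p: float, var_pv: float, target: float = 0.80) -> int:
    """Smallest number of clips whose coefficient reaches `target`."""
    k = 1
    while gcoef(var_p, var_pv, k) < target:
        k += 1
    return k

# illustrative proportions: 40% participant, 55% participant x video (+ error)
assert clips_needed(0.40, 0.55, 0.80) == 6
```

As in the Spearman-Brown logic, adding clips shrinks the per-clip participant x video (and error) variance, so reliability rises monotonically with k.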

It is worth noting that the curiosity and interest ratings used in the current experiment may have slightly different meanings than those used in other studies on curiosity and interest. Specifically, curiosity ratings in the current experiment asked whether participants were curious about how the trick was done (i.e., curiosity in the solution), whereas interest ratings asked whether they were interested in the magic tricks themselves (i.e., interest in the trick). Curiosity in the solution focused on the subjective motivation to close the knowledge gap (Loewenstein, 1994), while interest in the trick focused more on the positive emotional feelings due to the apparent uncertainty and impossibility of the trick (Silvia, 2005). However, some other studies (e.g., Fastrich et al., 2017; McGillivray et al., 2015) operationalized the feeling of interest as the satisfaction of curiosity (e.g., the positive feelings when seeing the answer to a trivia question). Other researchers, especially in the field of education, define interest more broadly in relation to a learner's goals, values, and pre-existing knowledge (for a review, see Hidi & Renninger, 2019). Our labeling of curiosity and interest is rather ad hoc: we simply used the terms in a way that allows participants to intuitively understand the focus of these feelings, i.e., curiosity about the solution vs. interest in the magic trick itself. In fact, we are hesitant to commit to the debate over the exact definitions of curiosity and interest (for details on our view, see Murayama, Fitzgibbon, & Sakaki, 2019). However, researchers should bear in mind differences in the conceptualizations of curiosity and interest when interpreting the findings reported in the current article (see also Shin & Kim, 2019).

Some additional points are worth discussing. First, our findings from the rating analysis may be limited in that we relied on single-item measures to assess epistemic emotions, which are expected to be less reliable than multiple-item measures. However, we believe that simple subjective emotional feelings such as surprise can be assessed reliably and validly with a single-item measure (see also Diamantopoulos et al., 2012); in fact, most previous studies have used single-item measures to assess such epistemic emotions and found meaningful relations with other variables (e.g., Kang et al., 2009; Vogl et al., 2019). Even so, empirical investigation is needed to understand the extent of the problem when using single-item measures to examine epistemic emotions. Second, although we focused on the emotions of curiosity, interest, and surprise, other types of emotions may be experienced when people watch magic tricks. For example, Leddington (2016) argued that the heart of the experience of magic is a conflict between "intellectual belief (the magic is impossible)" and "emotional belief (the magic is actually happening)." Further research is required to investigate such emotions as well. Third, we observed that trial number was negatively associated with epistemic emotions and confidence, indicating a general declining trend in these ratings over trials, perhaps reflecting a familiarization effect. Researchers who use MagicCATs should take this declining trend into account when deciding how many tricks to use in their studies. Fourth, the variance decomposition analysis and within-person correlations suggest that the three types of epistemic emotions we assessed overlap substantially, even if they also exhibited some unique variance.
The high intercorrelations between epistemic emotions are consistent with previous studies and not surprising, given that these emotions are likely to be causes or consequences of one another (e.g., Vogl et al., 2019). As all the ratings were collected in quick succession, response bias may also have played some role (Podsakoff et al., 2003). However, these findings also suggest the importance of assessing and including these emotions together in empirical studies when researchers are interested in examining unique aspects of each specific type of epistemic emotion. Finally, although we did our best to control various aspects of the video clips (e.g., background, expressions of the magicians, etc.), there are notable differences between the magic tricks. For example, running times vary widely between the video clips, and some video clips have subtitles and/or show a third person (e.g., a person picking a card). These factors can be confounds in experimental work. However, these variations were necessary to ensure the generalizability of experimental findings from these stimuli. Researchers can easily control for differences between the videos by pre-screening the video clips according to the aims of their studies. We hope to expand the collection further so that it is even easier for researchers to select video clips that fit their research questions.