Abstract
Facial expressions play an essential role in social interactions. Databases of face images have informed theories of emotion perception and have found applications in other disciplines such as facial recognition technology. However, the faces of many ethnicities remain largely underrepresented in existing face databases, which can limit the generalizability of the theories and technologies built on them. Here, we present the first survey-validated database of Iranian faces. It consists of 248 images of 40 Iranian individuals portraying six emotional expressions—anger, sadness, fear, disgust, happiness, and surprise—as well as the neutral state. The photos were taken in a studio setting, following common emotion-induction scenarios and controlling for lighting, camera setup, and the model's head posture. An evaluation survey confirmed high agreement between the models’ intended expressions and the raters’ perception of them. The database is freely available online for academic research purposes.
Introduction
The importance of faces as visual stimuli has been demonstrated across multiple disciplines such as social, behavioral, and neural sciences. Facial expressions carry valuable social information about a person's emotional state and intentions (Emery, 2000; Penton-Voak et al., 2006; Tomasello et al., 2007). Developmental studies have reported preferential attention to face-like stimuli in newborn babies (Goren et al., 1975; Johnson et al., 1991; Grossmann & Johnson, 2007), and neurophysiological studies have provided evidence for the existence of specialized brain areas and pathways for processing faces (Kanwisher et al., 1997; Vuilleumier & Pourtois, 2007; Schindler & Bublatzky, 2020; Calder & Young, 2005; Morris et al, 1998).
Given the significance of faces to humans, they are regularly used as stimuli in a variety of scientific disciplines related to human face perception (Coin & Tiberghien, 1997) including emotion recognition (Durand et al., 2007; Marsh et al., 2012), facial recognition by computers (Huang et al., 2005; Balla & Jadhao, 2018; Chellappa et al., 2010), neuropsychological disorders (Harms et al., 2010; Turetsky et al., 2007; Bornstein & Kidron, 1959), and the effect of emotion on cognitive processes (Adolphs et al., 2002; Nelson et al., 2003; Breiter et al., 1996).
While there is agreement that facial movements are informative for inferring emotions (Ekman & Friesen, 1976), the existence of a precise mapping between configurations of facial movements and emotion categories that generalizes across all cultures remains strongly debated (Nelson & Russell, 2013; Durán & Fernández-Dols, 2021; Barrett et al., 2019). Most existing emotional face stimulus sets have been developed in Western societies and a few countries in East and South Asia (the Database of Faces, Cambridge, 2001; the EU-Emotion Stimulus Set, O’Reilly et al., 2016; the Tsinghua facial expression database, Yang et al., 2020; the Chicago Face Database, Ma, Correll, & Wittenbrink, 2015; the MPI facial expression database, Kaulard et al., 2012; the Radboud Faces Database, Langner et al., 2010; the Developmental Emotional Faces Stimulus Set, Meuwissen et al., 2017; the Japanese Female Facial Expression (JAFFE) Database, Lyons et al., 2017; see Diconne et al., 2006, and Calistra, 2015, for a more comprehensive list), reflecting the ethnic composition of those societies and the potential effects of their culture on emotional expressions (Jack et al., 2012). Hence, there remains a gap in the existing scientific data in representing other countries and cultures.
To the best of our knowledge, two databases of Iranian faces have been previously developed. The Iranian Face Database (IFDB) (Bastanfard et al., 2007) was the first database of Middle Eastern faces, developed with a strength in covering a wide range of ages and poses. It includes 3600 color images from 616 human faces at different ages between 2 and 85 years. However, it only features two emotional expressions—smile and frown—and is only available for a fee.
The Iranian Kinect Face Database (IKFDB) (Mousavi & Mirinezhad, 2021) was published recently as the first dynamic RGB-D database of Middle Eastern faces. It consists of more than 100,000 color and depth frames recorded by the Kinect V2 sensor from 40 subjects in different head positions portraying the six basic facial expressions plus four micro-expressions, all with external features and a close-up view.
Both datasets were developed with a focus on computer vision applications and do not provide a validation study, making them less suitable as stimuli for psychophysical experiments. Furthermore, neither database is currently accessible free of charge. Together with the Bogazici face database from Turkey (Saribay et al., 2018), ours is thus one of the only validated databases of Middle Eastern faces.
The Iranian Emotional Face Database (IEFDB) has been created to address the need for a database of standard and validated Iranian faces for related studies. It consists of 248 photos of 40 individuals’ faces, covering six emotional states—anger, fear, happiness, surprise, sadness, disgust—as well as the neutral state. All photos were taken at high resolution under consistent conditions of lighting, camera setup, and head and eye position. The collected images were validated through an online survey completed by Iranian raters. The database is freely available for academic use (see the Data availability section).
Methods
Development of database
Face models
Forty native Iranians (15 female) in the age range of 18–35 years (mean = 26.50, SD = 4.82) volunteered to participate as the face models for the database. The ethnicities of the models are as follows: Persian, 18 (45%); Azerbaijani, 11 (27.5%); Kurd, 6 (15%); Gilak and Mazanderani, 4 (10%); Lur and Bakhtiari, 1 (2.5%). Metadata on each model’s age, sex, and ethnicity is available upon request. The volunteers were all students, researchers, or faculty members of Tehran University of Medical Sciences who were notified about the face database by an online announcement. The models were fully informed about the experimental procedure and signed a written informed consent to have their photographs taken and published for scientific research purposes (e.g., scientific experiments, publications, and presentations). The study was approved by the local ethics committee at the School of Advanced Technologies of Medicine at Tehran University of Medical Sciences.
Image acquisition
The photos were shot in a room specifically equipped as a studio in the core research facilities of Tehran University of Medical Sciences. The shooting setup, including the camera settings, lighting, and room temperature, was controlled and kept consistent across all sessions (see Fig. 1). The camera (Canon EOS 650D) used an 18–35 mm lens to take high-resolution images (5184 × 3456 pixels) of the face models in portrait mode. A 3 × 2 m green-screen photo studio backdrop was used for the background. For lighting, we used two professional spotlights (1000 W), placed behind the camera within 1.5 m of the subject (see Fig. 2a). The spotlights were softened by blue polarizer film sheets and a transparent sheet. The model sat on a comfortable chair with a fixed head and neck position and was asked to look directly at the camera at the time of shooting. A 15.6-inch Lenovo laptop screen (Lenovo Inc., Beijing, China), located right below the camera, was used to show videos and images for eliciting emotions. The models were required to wear no makeup, jewelry, or accessories such as glasses or piercings, though some still used sunscreen or moisturizing cream. All female models wore a head covering (hijab) as mandated by the law of the country.
Each session started with explaining the purpose and the content of the study to the models and obtaining participation consent. Next, the emotional expressions were shot one by one in the following order: neutral, happiness, disgust, sadness, anger, fear, and surprise, taking 30–40 minutes in total. Photos with a poor head angle and blurry photos were discarded at the shooting time. To elicit emotions in the models, we used personal event induction and scenario induction as used in previous studies (Ebner et al., 2010; Dalrymple et al., 2013). For personal event induction, the models were asked to recall an event from their own life which strongly elicited the target emotion. For scenario induction, they were shown visual stimuli intended to elicit the target emotion and/or asked to imagine themselves in specific circumstances which would elicit the target emotion. The models were encouraged to show the emotions in their face intensely but naturally. See Supplementary Materials for more details about the content of the instructions and scenarios.
Database validation
Approach
To validate the database, we designed an online evaluation survey in which human raters rated their perceived intensity of each candidate expression in each photo (see Fig. 3). The survey was implemented using jsPsych (de Leeuw & Motz, 2016) and remains available on the database website in both Farsi and English (Heydari & Yoonessi, 2019). The model photos were presented in their original form (without cropping external features such as the hijab).
Procedure and raters
In each survey attempt, the database photos were presented to the rater one by one in a random order. For each photo, the rater was asked to score the intensity level of each candidate emotion in the model’s face (see Fig. 3). The levels are based on the Likert scale (Likert, 1932).
Ratings were collected in two phases. In the first phase, the raters took a shorter version of the survey (to facilitate rater recruitment) with only five images to rate, randomly selected from the database. At the end of this phase, we had ~2500 rating records, which we used to identify ambiguous stimuli: the images whose target expression received a lower average intensity rating than another expression (32 out of 280) were excluded from further ratings and analyses. Similar criteria have been used by other dataset creators (e.g., Yang et al., 2020; Ebner, Riediger, & Lindenberger, 2010). In the second phase, the additional dimensions of attractiveness, valence, and genuineness were added to the survey, which was then taken by 11 raters who each rated all 248 remaining images. Overall, close to 5300 rating records were collected, with most images rated at least 20 times. All raters received a short written description of the survey content prior to the start of the rating.
Validation results
Expression identification
We observed a significant effect of expression for all categories. Figure 4 shows the distribution of the ratings, grouped by the model expression. For each category, a Welch's unequal-variances t-test was performed for each pairing of the target expression with the other expressions (five tests), and Bonferroni correction was applied to adjust the p-values for multiple tests. All tests showed significant differences, with p < 1e−5 for all comparisons.
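The per-category comparison can be sketched as follows. This is a minimal illustration on synthetic rating data (the real survey records and Likert coding differ); only the statistical procedure—Welch's test with Bonferroni adjustment—matches the paper's description.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for one category's rating records (e.g., happiness):
# intensity scores given to each candidate emotion across ~200 ratings.
ratings = {
    "happiness": rng.integers(3, 5, size=200).astype(float),  # target expression
    "anger":     rng.integers(0, 2, size=200).astype(float),
    "sadness":   rng.integers(0, 2, size=200).astype(float),
    "fear":      rng.integers(0, 2, size=200).astype(float),
    "disgust":   rng.integers(0, 2, size=200).astype(float),
    "surprise":  rng.integers(0, 2, size=200).astype(float),
}

target = "happiness"
others = [e for e in ratings if e != target]

# Welch's unequal-variances t-test of the target expression against each
# of the other five expressions, with Bonferroni adjustment of p-values.
results = {}
for other in others:
    t, p = stats.ttest_ind(ratings[target], ratings[other], equal_var=False)
    results[other] = (t, min(p * len(others), 1.0))  # Bonferroni-adjusted p

for other, (t, p_adj) in results.items():
    print(f"{target} vs {other}: t = {t:.1f}, adjusted p = {p_adj:.2g}")
```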
To report comparable measures with other studies, we also computed a confusion matrix to show the ratio of correct expression identification by category, as well as the biases in misidentifications (see Fig. 5). As our survey allows assigning intensities to more than one expression, for each rating record we first assigned the expression with the largest reported intensity as the chosen expression. For records in which all expressions were rated less than Fair, the chosen expression was assigned as Neutral. As shown in Fig. 5, Happiness had the highest hit rate (97%), and Fear and Disgust had the lowest (67% and 79%, respectively). The most common type of miscategorization was for the images in the Fear category being categorized as Surprise (22%). The overall Cohen’s kappa coefficient (Cohen, 1960) for the agreement between the model expressions and the rater-perceived expressions was 0.79.
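The aggregation rule and agreement statistic described above can be sketched as follows; the rating records are made up, and the numeric level assigned to "Fair" is an assumption for illustration.

```python
EMOTIONS = ["anger", "sadness", "fear", "disgust", "happiness", "surprise"]
FAIR = 2  # hypothetical numeric level for "Fair" on the Likert scale

def chosen_expression(intensities):
    """Reduce one rating record (per-emotion intensities) to a single label:
    the highest-rated emotion, or 'neutral' if all are rated below Fair."""
    if max(intensities.values()) < FAIR:
        return "neutral"
    return max(intensities, key=intensities.get)

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_exp = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Two made-up rating records
rec = {"anger": 0, "sadness": 1, "fear": 3, "disgust": 0,
       "happiness": 0, "surprise": 4}
print(chosen_expression(rec))                       # surprise
print(chosen_expression({e: 0 for e in EMOTIONS}))  # neutral

# Agreement between target and chosen labels over four made-up records
targets = ["fear", "fear", "surprise", "happiness"]
chosen  = ["surprise", "fear", "surprise", "happiness"]
print(round(cohen_kappa(targets, chosen), 2))       # 0.64
```

The confusion matrix in Fig. 5 is then simply the cross-tabulation of the target labels against the chosen labels.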
Other variables
We also collected ratings for attractiveness, valence, and genuineness for each image from a smaller number of raters (N = 11) and computed the Spearman’s rank correlation (ρ) between them and the perceived intensity of the expressions. As expected, valence was strongly correlated with happiness (ρ = 0.56, 95% CI [0.53, 0.59]), and weakly anti-correlated with anger (ρ = −0.10, 95% CI [−0.14, −0.06]). Attractiveness was correlated with happiness (ρ = 0.27, 95% CI [0.23, 0.30]). Valence and Attractiveness were also correlated with each other (ρ = 0.65, 95% CI [0.62, 0.67]). Genuineness was generally correlated with the intensity of any emotion, but most strongly with happiness (ρ = 0.26, 95% CI [0.22, 0.29]). See Supplementary Materials for the correlation between all pairings of variables.
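A sketch of this analysis on synthetic data is below; the paper does not state how the confidence intervals were obtained, so a percentile bootstrap is assumed here as one common choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for per-record ratings: perceived happiness
# intensity and valence, correlated by construction.
happiness = rng.integers(0, 5, size=300).astype(float)
valence = happiness + rng.normal(0.0, 2.0, size=300)

# Spearman's rank correlation between the two rated dimensions
rho = stats.spearmanr(happiness, valence)[0]

# Percentile-bootstrap 95% CI for rho (assumed interval method)
n = len(happiness)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)  # resample records with replacement
    boot.append(stats.spearmanr(happiness[idx], valence[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"rho = {rho:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```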
Discussion
In this study, we collected and validated a database of basic emotional expressions from 40 Iranian male and female faces. The goal was to provide high-quality images that capture all six basic emotional expressions in a controlled studio setting for future researchers. The database screening and validation were conducted through an online survey with select-all-that-apply choices where the intensity of each emotion was independently reported for each image. Most images have been rated at least 20 times in terms of perceived emotional intensity.
Our database is the only existing Iranian face database that provides a validation study. Our images were validated by Iranian raters to reduce the cross-cultural confusion effects (Elfenbein & Ambady, 2002). It is also the only one with a well-controlled and consistent shooting setup which is also available free of charge. The shooting setup is highly standardized, controlling for the camera setup, lighting conditions, background, and the subject’s head position. The images are in high resolution (5184 × 3456 pixels).
Compared to the common forced-choice survey paradigm, the select-all-that-apply format of our survey allows the raters to report the intensity of more than one emotion for each image, providing a more nuanced description (as was used to produce Fig. 4). This may be especially important when examining whether the findings from datasets of other ethnicities generalize to Iranian faces. Furthermore, it also lends itself more naturally to many computer vision algorithms which output a membership degree to each emotion per image.
The expression identification results (Figs. 4 and 5) show high agreement between the model’s intended expression and the rater-perceived expression for nearly all emotion categories. Notably, many images intended to show fear were reported to have a fair or high level of surprise. The confusion between fear and surprise is a well-known effect and has been previously explored by researchers. According to the perceptual-attentional limitation hypothesis, the confusion between fear and surprise might be caused by the visual similarity of the two expressions and the shared facial muscles involved in the movements (Roy-Charland et al., 2014; Chamberland et al., 2017; Zhao et al. 2017). Ekman (1993) specified the action units of fear to include those of surprise, and more. This may explain why in our data, more fear images are rated as surprise than vice versa.
Ratings were collected in two phases, where the first phase had a short-form survey with only five images included. We found the interrater reliability to be lower for the short survey (average intraclass correlation ICC(1,1) = 0.42 as compared to 0.58 for the full format) (Liljequist et al., 2019), but the effect of the model's expression remained highly significant when computed only using the short survey's subset of the rating data. We also did not collect metadata including name, age, ethnicity, etc., from the online raters. New surveys based on the database may include the above metadata as well as the evaluation of additional measures such as perceived age.
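ICC(1,1) is the one-way random-effects intraclass correlation for single ratings. A minimal sketch of the standard ANOVA-based formula on a made-up complete ratings matrix is shown below; the actual survey data, with varying raters per image, would require a missing-data treatment.

```python
import numpy as np

def icc_1_1(X):
    """One-way random-effects ICC(1,1) for an n_targets x k_raters matrix:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between targets
    msw = ((X - row_means[:, None]) ** 2).sum() / (n * (k - 1))   # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical ratings: 6 images, each rated by the same 4 raters
X = np.array([
    [4, 4, 3, 4],
    [1, 0, 1, 1],
    [3, 4, 4, 3],
    [0, 1, 0, 0],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
], dtype=float)
print(round(icc_1_1(X), 2))  # 0.9
```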
In our database of Iranian models rated by Iranian raters, the six basic emotions of happiness, sadness, anger, disgust, fear, and surprise remain largely identifiable, with lower identifiability for fear and disgust, which may have culture-specific underlying reasons (Jack et al., 2009; Jack et al., 2012). Rigorous testing of such effects and other hypotheses on emotion perception was not a focus of this study. To connect to the broader research in the cross-cultural debate on facial emotion expression, a natural next step is a more in-depth characterization of the differences in expression and judgment between Iranians and other ethnicities.
Data availability
The database is freely accessible via the database website (Heydari & Yoonessi, 2019) and the Open Science Framework (OSF) (Heydari, 2020). The items available on the website include: the original version of the images; a cropped version of the same photos, i.e., without external features such as the neck, hair, and head covering; the 45 images that were excluded during post-processing, placed in a separate category of “mixed” emotions; the metadata for each image, including the intended expression, model ethnicity, and age; and the aggregated survey ratings. The high-resolution version of the photos is also available upon researchers’ request. The evaluation survey remains online for reference as well as for adding new ratings in the future. The existing rating data is available upon request.
References
Adolphs, R., Baron-Cohen, S., & Tranel, D. (2002). Impaired recognition of social emotions following amygdala damage. Journal of cognitive neuroscience, 14(8), 1264–1274.
Balla, P. B., & Jadhao, K. (2018). IoT based facial recognition security system. Paper presented at the 2018 International Conference on Smart City and Emerging Technology (ICSCET).
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1–68.
Bastanfard, A., Nik, M. A., & Dehshibi, M. M. (2007). Iranian face database with age, pose and expression. Paper presented at the 2007 International Conference on Machine Vision.
Bornstein, B., & Kidron, D. (1959). Prosopagnosia. Journal of Neurology, Neurosurgery & Psychiatry, 22(2), 124–131.
Breiter, H. C., Etcoff, N. L., Whalen, P. J., Kennedy, W. A., Rauch, S. L., Buckner, R. L., ... Rosen, B. R. (1996). Response and habituation of the human amygdala during visual processing of facial expression. Neuron, 17(5), 875-887.
Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6(8), 641–651.
Calistra, C. (2015). 60 Facial Recognition Databases. Retrieved from https://www.kairos.com/blog/60-facial-recognition-databases on Sep 15, 2021.
Cambridge, A. T. L. (2001). The ORL Database of Faces. University of Cambridge. Retrieved 10 September 2020, from http://cam-orl.co.uk/facedatabase.html
Chamberland, J., Roy-Charland, A., Perron, M., & Dickinson, J. (2017). Distinction between fear and surprise: an interpretation-independent test of the perceptual-attentional limitation hypothesis. Social Neuroscience, 12(6), 751–768.
Chellappa, R., Sinha, P., & Phillips, P. J. (2010). Face recognition by computers and humans. Computer, 43(2), 46–55.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
Coin, C., & Tiberghien, G. (1997). Encoding activity and face recognition. Memory, 5(5), 545–568.
Dalrymple, K. A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children’s Faces: Acquisition and validation of a new face stimulus set. PloS one, 8(11), e79131.
de Leeuw, J. R., & Motz, B. A. (2016). Psychophysics in a Web browser? Comparing response times collected with JavaScript and Psychophysics Toolbox in a visual search task. Behavior Research Methods, 48(1), 1–12.
Diconne, K., Kountouriotis, G. K., Paltoglou, A. E., Parker, A., & Hostler, T. J. (2006). Presenting KAPODI–The Searchable Database of Emotional Stimuli Sets. Emotion Review, 17540739211072803.
Durán, J. I., & Fernández-Dols, J. M. (2021). Do emotions result in their predicted facial expressions? A meta-analysis of studies on the co-occurrence of expression and emotion. Emotion 21(7):1550–1569. https://doi.org/10.1037/emo0001015
Durand, K., Gallay, M., Seigneuric, A., Robichon, F., & Baudouin, J.-Y. (2007). The development of facial emotion recognition: The role of configural information. Journal of Experimental Child Psychology, 97(1), 14–27.
Ebner, N. C., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior Research Methods, 42(1), 351–362.
Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48, 384–392. https://doi.org/10.1037/0003-066X.48.4.384
Ekman, P., & Friesen, W. V. (1976). Measuring facial movement. Environmental Psychology and Nonverbal Behavior, 1(1), 56–75.
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: a meta-analysis. Psychological Bulletin, 128(2), 203.
Emery, N. J. (2000). The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews, 24(6), 581–604.
Goren, C. C., Sarty, M., & Wu, P. Y. (1975). Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics, 56(4), 544–549.
Grossmann, T., & Johnson, M. H. (2007). The development of the social brain in human infancy. European Journal of Neuroscience, 25(4), 909–919.
Harms, M. B., Martin, A., & Wallace, G. L. (2010). Facial emotion recognition in autism spectrum disorders: a review of behavioral and neuroimaging studies. Neuropsychology Review, 20(3), 290–322.
Heydari, F. (2020). Emotional face database. Open Science Framework. Retrieved from osf.io/a6e2u
Heydari, F., & Yoonessi, A. (2019). Emotional face database. Retrieved 1 September 2020, from http://e-face.ir/
Huang, T., Xiong, Z., & Zhang, Z. (2005). Face recognition applications. In Handbook of Face Recognition (pp. 371–390). Springer.
Jack, R. E., Blais, C., Scheepers, C., Schyns, P. G., & Caldara, R. (2009). Cultural confusions show that facial expressions are not universal. Current Biology, 19(18), 1543–1548.
Jack, R. E., Garrod, O. G., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences, 109(19), 7241–7244.
Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns' preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1–19.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11), 4302–4311.
Kaulard, K., Cunningham, D. W., Bülthoff, H. H., & Wallraven, C. (2012). The MPI facial expression database—a validated database of emotional and conversational facial expressions. PloS one, 7(3), e32321.
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H., Hawk, S. T., & Van Knippenberg, A. D. (2010). Presentation and validation of the Radboud Faces Database. Cognition and emotion, 24(8), 1377–1388.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 55.
Liljequist, D., Elfving, B., & Skavberg Roaldsen, K. (2019). Intraclass correlation–A discussion and demonstration of basic features. PloS one, 14(7), e0219854.
Lyons, M., Kamachi, M., Gyoba, J. (2017). Japanese Female Facial Expression (JAFFE) Database. figshare. Journal Contribution. https://doi.org/10.6084/m9.figshare.5245003.v2
Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122–1135.
Marsh, P. J., Luckett, G., Russell, T., Coltheart, M., & Green, M. J. (2012). Effects of facial emotion recognition remediation on visual scanning of novel face stimuli. Schizophrenia Research, 141(2-3), 234–240.
Meuwissen, A. S., Anderson, J. E., & Zelazo, P. D. (2017). The creation and validation of the developmental Emotional Faces Stimulus Set. Behavior Research Methods, 49(3), 960–966.
Morris, J. S., Friston, K. J., Büchel, C., Frith, C. D., Young, A. W., Calder, A. J., & Dolan, R. J. (1998). A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain: A Journal of Neurology, 121(1), 47–57.
Mousavi, S. M. H., & Mirinezhad, S. Y. (2021). Iranian Kinect face database (IKFDB): A color-depth based face database collected by Kinect v2 sensor. SN Applied Sciences, 3(1), 1–17.
Nelson, N. L., & Russell, J. A. (2013). Universality revisited. Emotion Review, 5(1), 8–15.
Nelson, E. E., McClure, E. B., Monk, C. S., Zarahn, E., Leibenluft, E., Pine, D. S., & Ernst, M. (2003). Developmental differences in neuronal engagement during implicit encoding of emotional faces: An event-related fMRI study. Journal of Child Psychology and Psychiatry, 44(7), 1015–1024.
O’Reilly, H., Pigat, D., Fridenson, S., Berggren, S., Tal, S., Golan, O., et al. (2016). The EU-emotion stimulus set: a validation study. Behavior Research Methods, 48(2), 567–576.
Penton-Voak, I. S., Pound, N., Little, A. C., & Perrett, D. I. (2006). Personality judgments from natural and composite facial images: More evidence for a “kernel of truth” in social perception. Social Cognition, 24(5), 607–640.
Roy-Charland, A., Perron, M., Beaudry, O., & Eady, K. (2014). Confusion of fear and surprise: A test of the perceptual-attentional limitation hypothesis with eye movement monitoring. Cognition and Emotion, 28(7), 1214–1222.
Saribay, S. A., Biten, A. F., Meral, E. O., Aldan, P., Třebický, V., & Kleisner, K. (2018). The Bogazici face database: Standardized photographs of Turkish faces with supporting materials. PloS one, 13(2), e0192018.
Schindler, S., & Bublatzky, F. (2020). Attention and emotion: An integrative review of emotional face processing as a function of attention. Cortex, 130, 362–386.
Tomasello, M., Hare, B., Lehmann, H., & Call, J. (2007). Reliance on head versus eyes in the gaze following of great apes and human infants: the cooperative eye hypothesis. Journal of Human Evolution, 52(3), 314–320.
Turetsky, B. I., Kohler, C. G., Indersmitten, T., Bhati, M. T., Charbonnier, D., & Gur, R. C. (2007). Facial emotion recognition in schizophrenia: when and why does it go awry? Schizophrenia Research, 94(1-3), 253–263.
Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia, 45(1), 174–194.
Yang, T., Yang, Z., Xu, G., Gao, D., Zhang, Z., Wang, H., ... Sun, P. (2020). Tsinghua facial expression database–A database of facial expressions in Chinese young and older women and men: Development and validation. PloS one, 15(4), e0231304.
Zhao, K., Zhao, J., Zhang, M., Cui, Q., & Fu, X. (2017). Neural responses to rapid facial expressions of fear and surprise. Frontiers in Psychology, 8, 761.
Acknowledgments
The shooting studio for this study was provided by the Core Research Facilities at Tehran University of Medical Sciences. We are grateful to the members of Neurocognitive Computing Lab at Tehran University of Medical Sciences for participating in our photo shooting process, specifically Ms. Setareh Farrokhi for letting us use her face images on the cover of our website and paper.
Cite this article
Heydari, F., Sheybani, S. & Yoonessi, A. Iranian emotional face database: Acquisition and validation of a stimulus set of basic facial expressions. Behav Res 55, 143–150 (2023). https://doi.org/10.3758/s13428-022-01812-9