Empathy is a crucial part of social interactions. It enables people to understand and share another person’s subjective emotional state (Preston, 2007) and is widely believed to be a key factor in promoting prosocial behavior (Eisenberg, 2000; Morelli, Rameson, & Lieberman, 2014).

Hoffman (1984) developed a model combining the cognitive and emotional facets of empathy. In this model, empathic arousal is rooted in preverbal, innate emotional responses and manifests in increasingly complex forms as our understanding of others’ mental states develops. Preston and De Waal (2002) suggested an integrated model of empathy, which also comprises emotional and cognitive facets but focuses on the underlying processes. According to their perception-action model (PAM; Preston, 2007; Preston & De Waal, 2002), empathy is defined as

a shared emotional experience occurring when one person (the subject) comes to feel a similar emotion to another (the object) as a result of perceiving the other’s state. This process results from the fact that the subject’s representations of the emotional state are automatically activated when the subject pays attention to the emotional state of the object. (Preston, 2007, p. 428)

The key proposal of the PAM lies in the automaticity of the empathic process: The attended percept of the object’s emotional state automatically activates the observer’s representation of a similar state, together with its associated autonomic and somatic responses, unless otherwise inhibited (Preston & De Waal, 2002, p. 4). According to the model, empathy may thus occur instantaneously and automatically when we pay attention to another person’s emotional state (Preston & De Waal, 2002).

Although the PAM argues for the automaticity of empathy, previous studies have shown that empathy is often not initiated automatically (Preston, 2007; Singer & Lamm, 2009). Attention is nevertheless necessary, because it triggers the postulated cascade of events that begins with emotional contagion and ultimately results in the full empathic experience (Singer & Lamm, 2009).

People have limited attentional resources and must therefore allocate them selectively to information that is valuable for behavior. One may thus conjecture that people have difficulty attending to others’ emotional states while attending to other stimuli. For example, people’s feelings toward others in need decrease as the number of people in the scene grows, because attentional capacity is exceeded (Slovic, 2007). Similarly, distraction of attention decreased the activation in insular and cingulate cortices that is otherwise observed while watching another person’s pain (Gu & Han, 2007). Hence, attentional resources seem to place a general limit on empathy.

In addition, the allocation of limited attentional resources plays a key role in individual differences in empathy. Previous studies using various methods, such as self-report (Davis, 1983; Mehrabian, 1997), facial electromyography (Dimberg, Andréasson, & Thunberg, 2011), heart rate (Oliveira-Silva & Gonçalves, 2011), and neuroimaging (Bernhardt, Klimecki, Leiberg, & Singer, 2014), have reported considerable individual differences in empathic responses. This variability seems to increase with age. Infants readily respond to the emotional states of their peers or caregivers (i.e., reactive crying; Thompson, 1998). As they grow up, they start to use attentional strategies to regulate their distress, which in turn modulates how long they orient toward a peer in pain (Geangu, Benga, Stahl, & Striano, 2011). Adults likewise shift their attention away from empathy-inducing stimuli when they predict that attending would lead to high-cost helping (Shaw, Batson, & Todd, 1994).

Taken together, this body of research suggests an important link between attentional susceptibility to another person’s emotional state and individual differences in empathy. Although the PAM acknowledges this role of attention (Preston, 2007), the underlying mechanism has not yet been fully clarified.

In this study, we use the attentional blink (AB) paradigm to study how attention relates to individual variability in empathy. The AB paradigm is particularly suited to this purpose because it allows us to measure individual differences in attentional susceptibility. The attentional blink refers to the phenomenon that identification of a first target (T1) transiently impairs detection of a second target (T2) during rapid serial visual presentation (RSVP) of a stimulus stream (Shapiro, Raymond, & Arnell, 1997). As the stimulus onset asynchrony (SOA) between T1 and T2 is reduced, participants are often unable to report T2 accurately, even though T1 has been reported correctly (Shapiro et al., 1997).

Attention allocation is affected by endogenous factors, such as task demands or personality traits, on the one hand, and by exogenous factors, such as visual salience or emotional valence, on the other (Kastner & Ungerleider, 2000). Interestingly, exogenous factors such as emotional valence bias the competition for attentional resources, so that emotionally salient stimuli are often detected more readily and effectively (Pessoa, Kastner, & Ungerleider, 2002). In the AB task, for example, emotionally salient T2 stimuli escape the refractory period of the AB (Anderson, 2005; Milders, Sahraie, Logan, & Donnellon, 2006). This effect has been observed for word stimuli (e.g., Anderson, 2005) and for photographs of facial expressions (de Jong, Koster, van Wees, & Martens, 2009; Fox, Russo, & Georgiou, 2005; Milders et al., 2006), implying that emotional T2 targets are preferentially processed even when attentional resources are limited.

Important for our purpose, endogenous factors such as personality traits or cognitive mindset also affect T2 processing in the AB paradigm (Fox et al., 2005; Romens, MacCoon, Abramson, & Pollak, 2011). For example, participants with high trait anxiety showed a reduced refractory effect for fearful, but not happy, faces (Fox et al., 2005). Another study examined the relationship between a negative cognitive mindset and the automatic allocation of attention to emotional information and reported similar individual differences: Participants with a highly negative mindset were more likely to detect negative-attribution T2 words than participants with a low or moderately negative mindset (Romens et al., 2011).

Such differences in AB performance may, however, merely reflect a general susceptibility to emotional information. To verify that the observed differences are due to empathy-related privileged attention to another person’s emotional state, this study employed the AB paradigm with two kinds of emotional face stimuli: participants’ own faces and other people’s faces. Because one’s own face has been reported to capture attention in various behavioral tasks (Tong & Nakayama, 1999) and has been associated with stronger neural responses (e.g., in masked priming; Pannese & Hirsch, 2011), facial expressions of one’s own face might likewise modulate the AB effect.

In sum, this study aimed to test whether empathic individuals tend to automatically allocate attention toward others’ emotional expressions. To examine this hypothesis, we use a modified version of the attentional blink paradigm. Neutral and emotional faces of self and other are presented to groups of participants identified as low, medium, or high in empathy, depending on their scores on a standardized scale of emotional empathy (Mehrabian, 1997). We measure the AB effect and hypothesize that individuals with high emotional empathy will detect emotional T2 targets more accurately than neutral ones when others’ faces are presented as the second target. The inclusion of both one’s own and other persons’ faces allows us to dissociate the general effect of emotional stimuli: If increased T2 detection performance is observed only for others’ faces, it can be interpreted as the result of preferential allocation of attention to others’ facial expressions of emotion, and not as a general effect of emotional stimuli.

Method

Participants

Participants were recruited among university students using an online application system. Based on the validation study of the Korean version of the BEES (Chung, 2012), volunteers scoring more than one standard deviation above the average BEES score were assigned to the high-empathy group, those scoring more than one standard deviation below the average to the low-empathy group, and the remaining participants to the medium-empathy group. Of the 108 participants recruited, three were excluded due to low concordance (<50%) during the validation process. Another five participants were excluded because their accuracy (see below) deviated more than two standard deviations from the mean performance (M = 88.23%, SD = 10.98%). A total of 100 participants were thus included in the final analysis of the main behavioral experiment (high-empathy group: two males, 19 females, M age = 22.95 years, SD = 2.38; medium-empathy group: 20 males, 28 females, M age = 24.31 years, SD = 3.47; low-empathy group: 11 males, 20 females, M age = 24.06 years, SD = 3.22). The average BEES score was 73.62 (SD = 10.26) for the high group, 41.38 (SD = 11.92) for the medium group, and -3.58 (SD = 13.25) for the low group. All participants reported no history of psychiatric illness and had normal or corrected-to-normal vision.
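For illustration, the grouping and exclusion rules can be expressed as follows. This is a minimal sketch, not the script used in the study; the column names and the normative mean and SD (which would come from Chung, 2012) are placeholders.

```python
import pandas as pd

# Hypothetical normative values; substitute the published mean and SD
# from the Korean BEES validation study (Chung, 2012).
NORM_MEAN, NORM_SD = 40.0, 25.0

def assign_empathy_group(bees_score: float) -> str:
    """High/low groups lie more than 1 SD above/below the normative mean."""
    if bees_score > NORM_MEAN + NORM_SD:
        return "high"
    if bees_score < NORM_MEAN - NORM_SD:
        return "low"
    return "medium"

def exclude_accuracy_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Drop participants whose overall accuracy deviates more than
    2 SDs from the sample mean (cf. M = 88.23%, SD = 10.98%)."""
    m, s = df["accuracy"].mean(), df["accuracy"].std()
    return df[(df["accuracy"] - m).abs() <= 2 * s]

# Example use: df["group"] = df["bees"].apply(assign_empathy_group)
```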

Apparatus

Participants were tested individually in a quiet and dark room. A Mac Pro (2.8 GHz) workstation presented stimulus events and recorded responses using the Psychophysics Toolbox (Version 3; Kleiner et al., 2007). The face stimuli were displayed on a 24-in. LCD monitor (Eizo, refresh rate = 120 Hz) and subtended a visual angle of 8° × 11.4°. Forced-choice responses to T1 and T2 were recorded using a RB-830 response pad (Cedrus, San Pedro, CA).

Questionnaires

Emotional empathy was assessed with the Balanced Emotional Empathy Scale (BEES; Mehrabian, 1997), which measures the vicarious experience of another person’s emotional state. The BEES consists of 30 items rated on a 9-point Likert scale ranging from −4 (very strong disagreement) to +4 (very strong agreement), with higher scores representing higher levels of emotional empathy. The scale was translated and validated for a Korean population (Chung, 2012). The original study reported a Cronbach’s alpha of .87 (Mehrabian, 1997), the Korean version yielded .89 (Chung, 2012), and our data yielded .94.
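For concreteness, the scale score and its internal consistency can be computed as follows. This sketch assumes a participants-by-items response matrix with reverse-keyed items already recoded; it is an illustration, not the study’s analysis code.

```python
import numpy as np

def bees_total(responses: np.ndarray) -> np.ndarray:
    """Total BEES score per participant: the sum of 30 items rated
    from -4 (very strong disagreement) to +4 (very strong agreement).
    Assumes reverse-keyed items have already been recoded."""
    return responses.sum(axis=1)

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```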

A Daily Helping Checklist (Morelli et al., 2014) was adopted as a postexperiment questionnaire. The items were translated into Korean and carefully revised by two bilingual researchers. The checklist of daily helping behaviors was sent to each participant at 11 p.m. via e-mail or SMS message for 7 days, and participants were instructed to complete it immediately before going to bed. The checklist includes 11 yes/no questions about daily helping behaviors toward both strangers and acquaintances; no (the event did not occur) was coded as 0 and yes as 1. The score for each day was the sum of the responses to the 11 items, and a mean daily score was calculated by averaging these totals across the 7 days. Because this measure is a count of experiences, it was not appropriate to calculate alpha coefficients.
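Scoring the checklist is simple arithmetic; a minimal sketch:

```python
import numpy as np

def mean_daily_helping(week: np.ndarray) -> float:
    """week: a (7, 11) array of 0/1 answers (7 days x 11 yes/no items).
    Daily scores are the sums of the 11 items; the final measure is
    the mean daily score across the 7 days."""
    daily_totals = np.asarray(week).sum(axis=1)
    return float(daily_totals.mean())
```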

Stimuli

Stimuli consisted of three types: pictures of other faces (Other-face), pictures of own faces (Self-face), and phase-scrambled distractors. Other-face stimuli were selected from the Korean Facial Expressions of Emotion (KOFEE) database (Park et al., 2011). The KOFEE comprises still images of seven facial expressions (happiness, disgust, anger, sadness, surprise, fear, and contempt) and neutral faces of Korean models. The models were trained to contract and relax the facial muscles associated with each emotion, so-called action units (AUs), based on the Facial Action Coding System (FACS; Ekman & Friesen, 1978). All facial expressions of the KOFEE were coded and validated through subsequent FACS coding. We used emotional (happiness and sadness) and neutral stimuli of six models (DB, DY, EW, JM, SR, YJ; three males and three females, 24 stimuli in total) from the database. Neutral faces of the selected models served as T1 stimuli, and neutral and emotional faces of the same models served as T2 stimuli. Phase-scrambled images of neutral faces were used as distractors.

Self-face stimuli were obtained from each participant following a protocol similar to that of the KOFEE database. Participants were trained for a week on the FACS (Ekman & Friesen, 1978) to be able to perform each AU properly. Following this protocol, participants were instructed to activate particular muscles rather than to pose emotional expressions. From the hundreds of photos taken, neutral, happy, and sad faces were selected. To control the quality of these stimuli, a validation experiment was conducted (see below).

All stimuli were converted to gray scale, and their intensity histograms were matched to remove overall differences in brightness. Stimuli were masked with an oval shape to remove hair, neck, and background. T1 stimuli were tinted red (converted to RGB color space, with each pixel value of the red channel multiplied by 2.25) to distinguish them from T2 stimuli in the presentation stream (Müsch, Engel, & Schneider, 2012).
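A rough sketch of this preparation, assuming Pillow and NumPy (the oval bounds are illustrative, and histogram matching is left to a library routine such as skimage.exposure.match_histograms):

```python
import numpy as np
from PIL import Image, ImageDraw

def mask_face(path: str) -> Image.Image:
    """Gray-scale an image and keep only an oval region, removing
    hair, neck, and background (oval bounds are illustrative)."""
    img = Image.open(path).convert("L")
    mask = Image.new("L", img.size, 0)
    ImageDraw.Draw(mask).ellipse((0, 0, img.width, img.height), fill=255)
    return Image.composite(img, Image.new("L", img.size, 0), mask)

def tint_red_t1(gray: Image.Image, gain: float = 2.25) -> Image.Image:
    """Convert a gray face to RGB and multiply the red channel by 2.25,
    marking it as a T1 stimulus (cf. Müsch, Engel, & Schneider, 2012)."""
    g = np.asarray(gray, dtype=float)
    rgb = np.stack([np.clip(g * gain, 0, 255), g, g], axis=-1)
    return Image.fromarray(rgb.astype(np.uint8), "RGB")
```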

Twenty-six participants (12 males, M age = 23.3 years, SD = 2.5; 14 females, M age = 22.4 years, SD = 1.2) were recruited for the validation experiment. Stimuli consisted of the self-face stimuli (split into stimuli from the high-empathy and low-empathy groups) as well as other-face stimuli selected from the KOFEE database. The task comprised a forced-choice question testing categorization accuracy as well as the Self-Assessment Manikin (SAM; Bradley & Lang, 1994) for ratings of valence and arousal. Recognition accuracies and ratings were compared using Kruskal–Wallis tests. As shown in Table 1, we found no significant group differences in categorization accuracy, arousal, or valence. Hence, the self-face stimuli are comparable to the KOFEE stimuli on both tasks.

Table 1 Results of Kruskal–Wallis tests of facial expressions for each group
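The comparisons in Table 1 correspond to independent-sample Kruskal–Wallis tests, e.g., with SciPy. The input lists below are hypothetical illustration values, not the study’s data.

```python
from scipy.stats import kruskal

# Hypothetical per-stimulus accuracies for the three stimulus sets:
# self faces from the high-empathy group, self faces from the
# low-empathy group, and KOFEE other faces.
acc_self_high = [0.92, 0.88, 0.95, 0.90]
acc_self_low = [0.91, 0.87, 0.93, 0.89]
acc_other = [0.94, 0.90, 0.92, 0.91]

h, p = kruskal(acc_self_high, acc_self_low, acc_other)  # H statistic, p value
print(f"H = {h:.2f}, p = {p:.3f}")
```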

Procedure

Each trial of the main experiment using the attentional blink paradigm started with a 500-ms fixation period followed by a stream of 16 visual stimuli comprising scrambled distractors and target faces (see Fig. 1). Each stimulus was displayed for 67 ms (= 8 frames) at the center of the monitor. The first face (T1) always had a neutral expression, whereas the second face (T2) was equally distributed across agency (other, self) and expression (neutral, happy, sad). All conditions were counterbalanced, and the two targets never had the same identity within a trial. The temporal interval between T1 and T2 varied between lag 2 (67 ms between T1 and T2), lag 3 (134 ms), lag 5 (201 ms), and lag 7 (335 ms). After each trial, participants reported the gender of T1 (male or female) and the emotion of T2 (neutral, happy, sad) by pressing the button dedicated to each response on the response pad. The button mapping was counterbalanced across participants.

Fig. 1 Schematic of the attentional blink procedure
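The structure of one such stream can be sketched as follows. This is an illustration only: the T1 position constraints are assumptions (the original positions are not specified above), and the function names are ours.

```python
import random

STREAM_LEN = 16      # stimuli per trial
LAGS = (2, 3, 5, 7)  # T2 appears `lag` positions after T1

def build_stream(t1_face, t2_face, distractors, lag):
    """One RSVP trial: scrambled distractors everywhere except the
    T1 slot (red-tinted neutral face) and the T2 slot (neutral,
    happy, or sad face) `lag` positions later."""
    assert lag in LAGS
    t1_pos = random.randint(2, STREAM_LEN - max(LAGS) - 1)  # illustrative range
    stream = [random.choice(distractors) for _ in range(STREAM_LEN)]
    stream[t1_pos] = t1_face
    stream[t1_pos + lag] = t2_face
    return stream, t1_pos
```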

Data analysis

All data were analyzed with SAS (Version 9.4; SAS Institute Inc., Cary, NC). Following standard procedure in the analysis of RSVP data (Anderson, 2005), T2|T1 accuracy (i.e., T2 accuracy on trials in which T1 was reported correctly) was submitted to a general linear mixed-effects model using the PROC MIXED procedure. Agency (self or other), Emotion (neutral, happiness, or sadness), and Lag (2, 3, 5, or 7) were within-subjects factors. Lag 1 was excluded because of well-known sparing effects (Hommel & Akyürek, 2005; Potter, Staub, & O’Connor, 2002). Empathy group, based on the self-reported BEES score, was included as a between-subjects fixed factor, and participant was included as a random effect. Pairwise post hoc comparisons with Bonferroni correction were computed to test for differences between conditions.
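The analysis was run in SAS; an equivalent model can be sketched in Python with statsmodels. The column names and file path below are assumptions, and the random-effects structure is simplified to a random intercept per participant.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ab_t2_given_t1.csv")  # hypothetical long-format data file

# Fixed effects: full factorial of Agency, Emotion, Lag, and empathy
# group; random intercept for each participant.
model = smf.mixedlm(
    "acc ~ C(agency) * C(emotion) * C(lag) * C(group)",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```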

To minimize individual differences in general task performance, an attentional blink index for emotional faces versus neutral faces was determined. This was defined as the difference in accuracy between emotional (happy and sad) and neutral conditions at each lag, as described below (see also Fox et al., 2005, for a similar definition). A positive index indicates that emotional stimuli promote attentional vigilance compared to neutral ones.

Attentional Blink Index_H-N = Accuracy_Happiness − Accuracy_Neutral    (1)

Attentional Blink Index_S-N = Accuracy_Sadness − Accuracy_Neutral    (2)
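Computed per participant and lag, the indices reduce to simple accuracy differences. A pandas sketch with assumed column and condition labels:

```python
import pandas as pd

df = pd.read_csv("ab_t2_given_t1.csv")  # hypothetical long-format data file

# Mean T2|T1 accuracy per participant, lag, and emotion, then the
# emotional-minus-neutral differences of Equations 1 and 2.
acc = (df.groupby(["participant", "lag", "emotion"])["acc"]
         .mean().unstack("emotion"))
ab_index_hn = acc["happy"] - acc["neutral"]  # Equation 1
ab_index_sn = acc["sad"] - acc["neutral"]    # Equation 2
```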

The attentional blink indices were also submitted to a general linear mixed-effects model with the same factors as above, excluding the neutral condition. All effects with p < .05 were treated as statistically significant. Only significant interactions are reported in the following.

Results

The accuracy of T1 responses

For T1 responses, the mixed-effects model revealed no group difference (p = .428) but a significant main effect of Lag, F(4, 388) = 773.87, p < .001, f² = 1.128. T1 accuracy increased as the lag increased, as found in previous studies (Milders et al., 2006; Müsch et al., 2012). Moreover, the main effects of Agency, F(1, 97) = 13.35, p < .001, f² = .004, and Emotion, F(2, 194) = 9.12, p < .001, f² = .001, were significant, with higher accuracy in the self-face condition than in the other-face condition (mean difference = 1.17%, t = 3.65, p < .001). Compared to the neutral condition, both the happy (mean difference = 1.48%, t = 3.76, p < .001) and sad (mean difference = 1.42%, t = 3.63, p < .001) conditions showed higher T1 accuracy. The difference between the happy and sad conditions was not significant (t = .13, p = .896). Both the Agency × Lag interaction, F(4, 388) = 3.5, p = .008, f² = .002, and the Agency × Emotion × Group interaction, F(4, 194) = 2.68, p = .033, f² = .001, were significant (see Fig. 2): In the high-empathy group, the neutral condition tended to have lower accuracy for self faces than for other faces. No other interaction was significant (all ps > .05).

Fig. 2 Comparison of T1 accuracy across agency, emotion, and empathy

The accuracy of T2|T1 responses

The mixed-effects model revealed a main effect of Emotion, F(2, 194) = 107.23, p < .001, f² = .173. Accuracy was higher in the happy condition than in both the neutral (t = 5.50, p < .001) and sad (t = 14.50, p < .001) conditions, and higher in the neutral than in the sad condition (t = 9.00, p < .001). Importantly, the Emotion × Group interaction was significant, F(4, 194) = 13.88, p < .001, f² = .020, as was the Agency × Emotion × Group interaction, F(4, 194) = 3.39, p = .011, f² = .001 (see Fig. 3). The main effect of empathy group was not significant, F(2, 97) = 0.75, p = .477. The effect of Lag was significant, F(3, 291) = 3.78, p = .011, f² = .023, as was the Emotion × Lag interaction, F(6, 582) = 11.85, p < .001, f² = .023 (see Fig. 4).

Fig. 3 Comparison of T2|T1 accuracy across agency, emotion, and empathy

Fig. 4 Comparison of T2|T1 accuracy across agency, empathy, and lags

Pairwise post hoc comparisons for the three-way interaction revealed that participants with high BEES scores identified sad expressions in the other-face condition significantly better than did the low-empathy group (t = 5.46, p < .001) and the medium-empathy group (t = 4.06, p < .001). The medium-empathy group was also more accurate than the low-empathy group (t = 2.09, p = .038). In the self-face condition, no such differences were observed.

As Fig. 4 shows, performance for both happy and neutral faces appears to have been at ceiling, with accuracy already high even at short lags.

The response time toward T1 stimuli

For T1 responses, the main effects of Lag, F(4, 388) = 126.35, p < .001, f² = .176, and Agency, F(1, 97) = 15.57, p < .001, f² = .002, were significant. Response times were longer at longer lags and in the other-face condition compared to the self-face condition (t = 3.95, p < .001).

The response time toward T2 stimuli

For T2|T1 responses, the main effects of Lag, F(3, 291) = 6.3, p < .001, f² = .008, and Agency, F(1, 97) = 8.37, p = .005, f² = .001, were also significant. As for T1, response times generally increased with lag and were longer toward other-face stimuli than toward self-face stimuli (t = 2.89, p = .005).

The T2 AB Index

The mixed-effects model for the T2 AB Indices revealed significant effects of Lag, F(3, 291) = 12.98, p < .001, f² = .037, and Emotion, F(1, 97) = 128.36, p < .001, f² = .134. The effect of Group was also significant, F(2, 97) = 3.23, p = .044, f² = .011. In addition, there were significant Emotion × Lag, F(3, 291) = 10.13, p < .001, f² = .011, and Agency × Group, F(2, 97) = 3.92, p = .023, f² = .001, interactions. Both the high-empathy group (t = 3.11, p = .003) and the medium-empathy group (t = 2.31, p = .020) had a higher AB Index than the low-empathy group in the other-face condition, but not in the self-face condition (see Fig. 5).

Fig. 5 Plot of the Attentional Blink Indices for each condition

The T2 AB Indices and self-report measures of emotional empathy

We next examined the correlations between the Attentional Blink Index of each emotion (happy-neutral, sad-neutral), averaged across lags, and the scores on the self-report measures. The Attentional Blink Index_S-N in the other-face condition correlated significantly with the BEES score (r = .445, p < .001; see Fig. 6, lower row). The correlation between the Attentional Blink Index_H-N and the BEES was marginal (r = .183, p = .068). Neither index correlated significantly with the BEES in the self-face condition (happy-neutral: r = .043, p = .669; sad-neutral: r = .106, p = .295).

Fig. 6 Correlations between Attentional Blink Indices for each emotion and BEES scores (lower row) and daily helping behavior (upper row)
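These are simple Pearson correlations between per-participant index averages and questionnaire scores, e.g., with SciPy. The file path and column names below are assumptions.

```python
import pandas as pd
from scipy.stats import pearsonr

# One row per participant: the sadness AB index for other faces
# (averaged across lags) and the BEES total score.
subj = pd.read_csv("subject_level_measures.csv")  # hypothetical file

r, p = pearsonr(subj["ab_index_sn_other"], subj["bees"])
print(f"r = {r:.3f}, p = {p:.3f}")
```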

The T2 AB Indices and actual helping behaviors in daily life

To relate the results of the AB task to actual prosocial behavior, we next examined correlations between the AB Indices (averaged across lags) and mean scores on the Daily Helping Checklist. For the other-face indices, correlations were significant in both the happy and sad conditions (happy-neutral: r = .383, p < .001; sad-neutral: r = .391, p < .011; see Fig. 6, upper row), whereas, again, there were no significant correlations in the self-face condition (happy-neutral: r = .021, p = .839; sad-neutral: r = .019, p = .848).

Discussion

The results of this study indicate that individual differences in emotional empathy are reflected in increased allocation of attention toward others’ emotional expressions under limited attentional resources. Both the emotional valence of the face and self-related information influenced the attentional blink effect as salient exogenous factors: Emotional T2 stimuli were detected more accurately than neutral stimuli, and self-face stimuli were processed preferentially compared to other-face stimuli at both T1 and T2.

We further showed that endogenous factors play an important role: Response accuracy toward others’ sad expressions varied according to the BEES score. Hence, our results reveal an effect of trait empathy as an endogenous factor interacting with stimulus features (Fox et al., 2005; MacLean & Arnell, 2010). Participants with high emotional empathy showed enhanced detection of T2 targets only for sad faces of others, compared to the other conditions. Similarly, the Attentional Blink Index, which reflects the strength of the emotional AB effect, confirmed that individuals with high trait empathy show a significantly reduced refractory effect toward another person’s sadness.

In addition, this study demonstrated a significant correlation between attentional sensitivity toward others’ emotional expressions and the frequency of everyday prosocial behaviors. In agreement with the perception-action model of empathy, our results support the notion that attention allocation toward others’ emotional expressions may be the initial step of empathy, triggering an empathic cascade that results in prosocial behavior.

Previous studies have indicated that accurate recognition of others’ emotional facial expressions is a prerequisite of empathic responding (e.g., Besel & Yuille, 2010) and have reported a positive relationship between individual differences in facial expression recognition and self-reported emotional empathy (Besel & Yuille, 2010; Gery, Miljkovitch, Berthoz, & Soussignan, 2009). The current findings confirm and extend these studies, and at the same time indicate the importance of attention allocation during the early stage of emotion processing.

Two neuroimaging studies have also provided evidence for a relationship between individual differences in emotional empathy and attentional characteristics. Rameson, Morelli, and Lieberman (2012) reported that the empathic response of individuals with high trait empathy does not change under high cognitive load, neither neurally nor behaviorally, whereas cognitive load otherwise reduced neural responses in an empathy-related region (the medial prefrontal cortex). An event-related potential (ERP) study similarly showed that an early component of empathic neural responses to another’s pain, possibly related to emotional sharing, was not affected by cognitive load, whereas later components were decreased (Fan & Han, 2008).

Interestingly, the findings of this study are less consistent with the results obtained by Kanske, Schönfelder, and Wessa (2013), who measured both behavior and ERPs using an AB paradigm similar to ours. They reported that a correlation between the effect of emotion on the AB and a self-report empathy scale (the Interpersonal Reactivity Index [IRI]; Davis, 1983) was observed only in the P3 amplitude, not in behavioral performance. However, as the P3 component is regarded as mainly reflecting stimulus access to memory and attention (Polich, 2007), the change in P3 may capture only later cognitive processing of the stimuli; as the authors note, preceding negative components may reflect earlier perceptual and emotional processing (e.g., N1 components; Ibáñez et al., 2011). Moreover, the full IRI consists of several orthogonal factors (Pulos, Elison, & Lennon, 2004); using only its affective subscales (Empathic Concern and Personal Distress), as Yamada and Decety (2009) did, might yield very different correlation patterns.

Typical emotional attentional blink (EAB) studies examine emotion-induced AB with emotionally loaded T1 stimuli to test the ability of these stimuli to rapidly capture attention (McHugo, Olatunji, & Zald, 2013). In this study, we adopted a variant using emotionally loaded T2 stimuli to examine the extent to which the AB is reduced for certain stimulus categories under limited attentional resources. According to the operational definition of empathy adopted in our study, mere unconscious processing of others’ emotional states does not fully meet the criterion of empathy, even if it induces congruent emotional states; this would be closer to emotional contagion, because the source of the affective state is not recognized. Hence, this study adopted a discrimination task rather than a mere detection task for T2 stimuli, thereby requiring explicit processing of one’s own or another person’s facial expression.

This study focused on only two basic emotions, happiness and sadness. Moreover, both happy and neutral faces appear to have been detected more easily than sad faces, for which the AB effect was robust. It is well known that happiness is recognized quickly and accurately compared with negative expressions such as sadness (see, e.g., Boucher & Carlson, 1980; Ekman & Friesen, 1971), a finding that also holds in the AB paradigm (e.g., Miyazawa & Iwasaki, 2010; Srivastava & Srinivasan, 2010). It will be an interesting topic for future research to investigate whether the processing advantage for empathic people we observed holds only for sadness (or other negative expressions, such as anger or fear) or whether it generalizes to positive expressions. Further studies with a wider range of emotional inducers and potentially increased task difficulty are therefore necessary to complete the picture drawn here.

Overall, the findings of this study indicate that privileged allocation of attention toward others impacts empathic processing and may constitute a critical difference between more and less empathic people. These findings may also have implications in clinical contexts, where they could help clarify the role of attentional abnormalities in dysfunctions of empathy (e.g., autism) or in uncontrollable and maladaptive empathic responses (e.g., Williams syndrome).