Journal of Autism and Developmental Disorders

Volume 49, Issue 1, pp 294–306

Signing with the Face: Emotional Expression in Narrative Production in Deaf Children with Autism Spectrum Disorder

  • Tanya Denmark
  • Joanna Atkinson
  • Ruth Campbell
  • John Swettenham
Open Access
Original Paper


This study examined facial expressions produced during a British Sign Language (BSL) narrative task (Herman et al., International Journal of Language and Communication Disorders 49(3):343–353, 2014) by typically developing deaf children and deaf children with autism spectrum disorder. The children produced BSL versions of a video story in which two children are seen to enact a language-free scenario where one tricks the other. This task encourages elicitation of facial acts signalling intention and emotion, since the protagonists showed a range of such expressions during the events portrayed. Results showed that typically developing deaf children produced facial expressions which closely aligned with native adult signers’ BSL narrative versions of the task. Children with ASD produced fewer targeted expressions and showed qualitative differences in the facial actions that they produced.


Keywords: Deaf · Autism · British Sign Language · Emotion · Narrative


Deaf people use facial expressions while signing to express their own emotions or to describe the emotions of others, drawing on the same range of emotional facial expressions used naturally by the general population, e.g. happiness, anger and sadness (Carminati and Knoeferle 2013). They also use facial actions that provide sign language prosody, functioning in sign language as intonation does in spoken languages. For hearing speakers, prosody is carried in the vocal channel through patterns of stress, rhythm and intonation; for deaf sign language users, it is conveyed through an extensive range of prosodic facial acts produced in synchrony with the movements and holds of the hands (Dachkovsky and Sandler 2009). Prosodic markers in sign language include lengthening effects as well as lower-face behaviours, eyeblinks and torso leans (Brentari and Crossley 2002). In addition, certain facial actions are considered integral to phonological, lexical, syntactic and discourse features of sign language (e.g. Sutton-Spence and Woll 1999; Stokoe 2005; Elliott and Jacobs 2013). Neuropsychological studies of deaf signers with discrete right- and left-hemisphere lesions (Corina et al. 1999; MacSweeney et al. 2008) have demonstrated a dissociation between linguistic and non-linguistic uses of the face, with linguistic functions localized to the left hemisphere and affective functions mediated by the right hemisphere.

Deaf signers have been shown to attend to faces to a greater extent than hearing people (Agrafiotis et al. 2006; Emmorey et al. 2009; Mitchell et al. 2013; Megreya and Bindemann 2017) and show face processing advantages relative to hearing non-signers (Bellugi et al. 1990; Bettger et al. 1997). It is plausible that one of the sources of this advantage relates to deafness itself, reflecting greater dependence on the visual channel for communication. Thus, deaf people attend more closely to facial actions which serve communication, whether or not they use sign language (see, for instance, Hauthal et al. 2012). However, since facial actions have developed within sign languages to a very marked extent, subserving linguistic as well as communicative functions (Campbell et al. 1999; Thompson et al. 2009), sign language exposure may play a further role in deaf people’s face processing abilities.

Research with hearing children diagnosed with autism spectrum disorder (ASD) has investigated both comprehension and production of emotional facial expressions. The majority of these studies have focused on emotion recognition, and while not all demonstrate impairment in emotion processing (Harms et al. 2010), many do (e.g. Ashwin et al. 2006; Lindner and Rosen 2006; Dziobek et al. 2010; Greimel et al. 2010; Lartseva et al. 2014; Chen et al. 2015; Brewer et al. 2016; Hubbard et al. 2017). By contrast, there have been relatively few studies on the production of emotional expressions in children with ASD. Observational studies in naturalistic settings have shown atypical use of facial expressions (Yirmiya et al. 1989; Capps et al. 1993; Dawson et al. 1990; Kasari et al. 1990; Bieberich and Morgan 2004; Stagg et al. 2014). For example, Yirmiya et al. (1989) compared children with ASD and a matched control group with developmental delay (DD) in their use of emotional facial expressions using the Early Social Communication Scales (Mundy et al. 2003), a videotaped structured observational measure. Children with ASD displayed significantly less affect in their facial expressions than DD controls. Similarly, Capps et al. (1993) found that parents of children with ASD reported less positive affect in their children's facial expressions than did parents of typically developing (TD) children.

The production of facial expressions has also been examined in more controlled experimental studies. For example, MacDonald et al. (1989) took photographs of high-functioning adults with ASD and neurotypical (NT) controls producing facial expressions for five different emotions (‘happy’, ‘sad’, ‘fear’, ‘angry’ and ‘neutral’). Judges (who were blind to experimental group) were asked to identify the emotion in these photos using a forced-choice rating system. The judges were significantly less likely to correctly identify the emotion expressed by individuals with ASD, and the facial expressions of the ASD group were rated as more “odd” relative to those produced by the NT controls. Volker et al. (2009) compared a group of 6–13 year old high-functioning children with ASD with matched TD children on their ability to enact facial expressions for six basic emotions (‘happy’, ‘sad’, ‘fear’, ‘anger’, ‘surprise’ and ‘disgust’). Participants in both groups were read a statement and asked to show a targeted emotion from the statement (e.g. “I want you to think of a time that you were really, really happy… show me a happy face.”). Six raters blind to group membership scored the children’s enacted facial expressions for accuracy and oddity. Raters were less able to recognize expressions of sadness produced by children with ASD than by TD children, and facial expressions of the children with ASD were more likely to be rated as “odd” than TD expressions. More recently, Brewer et al. (2016) investigated ASD and NT adults’ ability to recognise emotional expressions produced by ASD and NT models. This was the first study to use raters with ASD in addition to NT raters. ASD expressions were more poorly recognized than NT expressions regardless of recognizer group (ASD or NT), suggesting that atypical emotional expressions are idiosyncratic rather than systematic, and are not shared with other individuals with ASD.
Together, these studies provide strong evidence that the production of facial expressions for emotion is impaired in ASD in the hearing population.

In addition to atypical facial expression, several studies have demonstrated unusual prosody in speech production in ASD, including monotone intonation (speech with a narrow pitch range), overly fast speech rate, limited or unusual pitch ranges, poor volume modulation, and more frequent residual articulation distortion errors (Baltaxe and Guthrie 1987; Shriberg et al. 2001; Peppé et al. 2006; Hubbard and Trauner 2007). Grossman et al. (2013) compared young people with ASD and matched TD controls on their production of both vocal and facial expressions during a spoken narrative in a story retelling task. Raters blind to group membership coded the narrative videoclips for expressed emotion, intensity of expression, and naturalness/awkwardness of expression. Both groups produced vocal and facial expressions that were categorically accurate, but their productions were rated as qualitatively different, with the ASD group producing fewer natural and more awkward vocal and facial expressions.

There have been very few studies investigating ASD in deafness (Quinto-Pozos et al. 2011; Hansen and Scott 2018). Studies of communication skills in deaf children with ASD have identified characteristics of their communication equivalent to those found in hearing children with ASD (Scawin 2003; Shield and Meier 2012; Szymanski et al. 2012; Shield et al. 2015, 2016, 2017). For example, confusion of self and other in both gesture and pronoun use occurs in hearing children with ASD (Lee et al. 1994). Similar patterns have been observed in deaf signing children with ASD, who showed a tendency to reverse palm orientation on signs that must be specified for inward/outward orientation, as well as difficulty using American Sign Language pronouns, which tended to be avoided in favour of names (Shield and Meier 2012; Shield et al. 2015). In a study of comprehension of emotional facial expressions in sign language in deaf ASD and TD groups, we found that the ASD group showed a deficit during sign language processing analogous to the deficit in vocal emotion recognition observed in hearing children with ASD (Denmark et al. 2014).

Deaf individuals must attend to the face in order to communicate, but little is known about attention to faces in deaf people with ASD, or whether they are impaired in aspects of sign language communication involving the face. Two case studies of signers with ASD have described impairments in the use of facial expressions. Poizner et al. (1990) investigated the signing skills of a deaf adult signer with ASD (Judith), who had been exposed to sign language from birth through communication with her deaf parents. Despite an optimal native signing environment she had production deficits, with a distinct absence of facial expressions or gesture in her signed output. Morgan et al. (2002) described Christopher, a hearing linguistic savant with mild autism who was fluent in multiple spoken languages. A qualified deaf tutor taught British Sign Language to Christopher and a comparison group of hearing BSL learners, with emphasis on the core grammatical properties of the language, and their progress was recorded at each stage. Christopher used minimal facial expressions when signing and found the use of facial action difficult, especially its coordination with manual signing. This affected his ability to use facial actions to produce prosodically correct BSL; for instance, he was unable to indicate negation and question-type using his face. However, these are single case observations, so caution is needed in generalising the findings to a larger population.

Most studies examining the production of facial expressions in hearing people use controlled methods to elicit expressions; participants are presented with photos or video-clips of emotional expressions and are asked to imitate the facial expression. In others, participants may be asked to produce a facial expression appropriate to a particular scenario (e.g. when they are feeling sad, or in response to a verbal label; e.g. MacDonald et al. 1989; Volker et al. 2009). There are also a number of studies that examine the production of spontaneous and naturalistic facial expressions of emotion. For example, Sebe et al. (2007) covertly filmed participants while they were presented with emotional stimuli, and Zeng et al. (2006) used naturalistic observation, recording facial expressions produced during conversations with an experimenter. A problem with this methodology is the difficulty of ensuring that all participants produce sufficient and similar content to elicit emotional facial expressions. This approach would therefore not work well for individuals with ASD, who may have reduced conversational skills relative to a control group (Capps et al. 1998). An alternative method to elicit facial expressions in deaf signers is to use a short narrative and question–answer paradigm (Hendriks 2007). Hendriks argued that the benefit of using narratives when assessing facial expression production is that the topic, content and length of discourse can be controlled across individuals. Another advantage is that it reduces demands on memory. For these reasons, we adopted a similar methodology, using the BSL Narrative Production Test (BSLPT: Herman et al. 2004) to explore how deaf individuals with ASD and deaf TD controls compared in their production of facial expressions when using sign language. The BSLPT has previously been used in research contexts to explore language skills in deaf children with specific language impairment (Herman et al. 2014), and an adapted version has been used with deaf children who use spoken language (Jones et al. 2016).

The focus of this study was the production of facial actions when describing an event in sign language, where paralinguistic (emotional and prosodic) expressions were likely to be elicited.1 There are two hypotheses. The first (the null hypothesis) is that there will be no effect of ASD status on deaf children’s production of facial expressions in a BSL narrative task. This could occur if exposure to sign language, and the need for deaf people to attend to facial actions, provides some protection against the anomalies in emotional expression production that have been reported for hearing children with ASD. The alternative hypothesis is that the emotion processing deficits associated with ASD are more critical in determining emotional expression production and are not offset by increased attention to faces in deaf children with ASD. In that case, given that hearing individuals with ASD show both reduced quantity and quality of emotional facial expressions relative to TD controls (Yirmiya et al. 1989; Volker et al. 2009; Brewer et al. 2016) and impaired prosody in speech production (Peppé et al. 2006; Hubbard and Trauner 2007; Hubbard et al. 2017), we would expect the deaf ASD group to produce fewer and less appropriate facial expressions than deaf TD controls when producing a BSL narrative.



Twelve TD deaf individuals were recruited from deaf schools across the UK. Ten deaf individuals with ASD were recruited from the National Deaf Child and Adolescent Mental Health Service, where they had received a diagnosis of ASD from a specialist multidisciplinary social and communication disorders clinic for deaf children, using an adapted version of the Diagnostic Interview for Social and Communication Disorders (DISCO: Wing et al. 2002), a nonverbal battery measuring cognition (Leiter-R: Roid and Miller 1997) and a play assessment. A detailed method of assessing and diagnosing this population was used because of the challenges associated with diagnosing deaf children with ASD, namely the lack of a gold standard for assessment, the heterogeneity of deaf populations and the overlap between certain behaviours in deafness and in ASD, for instance not responding to one’s name being called (Mood and Shield 2014). This battery of tasks was used as part of the standard protocol to assist with diagnosis in this complex population and to rule out other conditions. The ASD group comprised two individuals with a diagnosis of Asperger’s syndrome; the remaining eight had diagnoses of childhood autism. Diagnoses met the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (American Psychiatric Association 1994). At the time of recruitment, this was the most comprehensive assessment available for deaf individuals with ASD in the UK.

All participants had bilateral severe or profound sensorineural hearing loss. Use of amplification was similar across both groups [ASD group: cochlear implant (5), hearing aids (4) and unaided (1); control group: cochlear implant (4), hearing aids (5) and unaided (3)]. To meet the inclusion criteria for the study, participants needed to be able to communicate using sign language at least at a phrasal level. One child in each group was a native signer with deaf parents; the remaining participants were all from hearing families. All participants were in bilingual education environments and were exposed to BSL in the classroom at the time of testing, which ensured greater consistency in signing abilities.

The groups were matched for chronological age, nonverbal intellectual ability using the Raven’s Standard Progressive Matrices (SPM: Raven et al. 1998) and BSL comprehension using the BSL Receptive Skills Test (BSLRST: Herman et al. 1999). Independent-samples t tests indicated no significant difference between the groups in chronological age [t(21) = −.85, p > .40], nonverbal ability [t(21) = .15, p > .78] or BSL comprehension [t(20) = −.25, p > .94].

The Social Responsiveness Scale (SRS: Constantino and Gruber 2005) is a short checklist, which was given to teachers of all participants to further confirm the diagnosis of ASD. Higher scores on the SRS indicate greater severity of social impairment; T-scores below 60 are considered normal, T-scores between 60 and 75 indicate clinically significant impairment in the mild to moderate range, and T-scores above 75 indicate severe social impairment. As such, T-scores ≥ 60 are used to indicate that a diagnosis of ASD may be appropriate (Moul et al. 2014). In support of the independent diagnostic criteria for ASD in the experimental group, the SRS confirmed significant differences between the deaf ASD group and the deaf TD control group [t(21) = −6.38, p < .001]. Ethical approval was obtained from the ethics committee at University College London and the National Research Ethics Service, which is the ethical body for the NHS.
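These T-score bands reduce to simple thresholds. As an illustrative sketch only (the function name and band labels are ours, not part of the SRS materials):

```python
def srs_band(t_score):
    """Classify an SRS T-score using the cut-offs described above."""
    if t_score < 60:
        return "normal"                       # below the clinical range
    if t_score <= 75:
        return "mild to moderate impairment"  # clinically significant
    return "severe impairment"                # T-scores above 75

# T-scores >= 60 suggest that a diagnosis of ASD may be appropriate
print(srs_band(58), srs_band(65), srs_band(80))
```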

Materials and Procedure

Facial Expressions in the BSLPT

The BSL Production Test is a standardised test of sign language production for deaf children aged 4–11 (Herman et al. 2004). It requires the respondent to produce a coherent narrative in BSL by re-telling a 3-min, language-free video scenario with two child protagonists silently acting out a story. While the story is fairly complex, the test has been normed on deaf children as young as 4 years old, who have been able to produce an appropriate narrative. The scenario was designed to elicit a range of lexical, morphological, grammatical and pragmatic features. The gist of the story is that the boy demands various food and drink items from the girl at different points in time, until she tricks him by putting a spider in a sandwich and giving it to him (see “Appendix” section). Retelling requires a signer to depict not only the actions of the protagonists but also their emotions, which change through the scenario. The signers retold the story to the experimenter, a hearing native BSL user. For the purpose of this study, adult native BSL signers determined where in the story a specific facial expression would be expected in a naturally signed re-telling, creating a scoring template of emotional facial expressions. Sometimes, but not always, these instances coincided with a manual action representing a lexical emotion label. For example, the ‘demand’ facial expression would typically occur simultaneously with the manual sign for DEMAND. However, there were other instances where the emotional responses of the protagonists needed to be incorporated without lexicalisation. For example, the facial expression for ‘surprise’ would be expected in the re-telling of the scenario event where a protagonist found a spider in his mouth—even though this re-enactment need not include the manual sign SURPRISE (Table 1).

Table 1

Groups were matched on age, nonverbal intellectual ability and BSL comprehension (measures: Raven SPM raw score, BSLRST raw score; groups: TD, n = 12; ASD, n = 10)

They differed on the SRS, a measure of ASD symptomatology

Raters determined that the two actors produced the following six facial expressions during the video: demand, refusal, annoyance, surprise, mischief and disgust. Two of these, ‘disgust’ and ‘surprise’, are considered basic universal emotions (Ekman 2000). The measured variables were the number, timing and quality of facial expressions in the BSL narrative produced by the children, relative to the expressions identified by the raters. There were 16 points in the video scenario at which raters identified specific facial expressions that would be expected in re-telling the story.

Table 2 shows the number of emotional facial expressions present in the original acted scenario. A scoring system similar to that of McIntire and Reilly (1988) was used, in which the quantity and quality of facial expressions were coded. Within each expression, the quantity of facial actions was scored. The scoring system was akin to the Facial Action Coding System (FACS), targeting the key facial actions in emotional expressions of BSL. FACS is a system designed for human observers to describe changes in facial expression in terms of visually observable activations of facial muscles (Ekman and Friesen 1978; Pantic and Rothkrantz 2004). FACS training enables the expert user to distinguish differences between and within seven facial expressions. Here, the aim was different: to identify a small number of possible expression categories that fitted the narrative produced, and to use these to score respondents’ facial actions. However, the FACS principles (identifying the appropriate muscle groups from video clips) underpinned the method for assessing the number and quality of facial actions, both in identifying the expressions used by the model storytellers (‘gold standard’, see below) and in assessing the quality of the expressions produced by the children. The experimenter met with a linguist specializing in sign language to develop a suitable scoring system and subsequently trained the second rater in its use.

Table 2

Number of emotional expressions produced by the characters in the BSLPT video (columns: affective expression; number of times produced in narrative)

Administering and Scoring BSL Retelling

Using BSL, the experimenter instructed participants to watch the video on a laptop computer and to re-tell the story afterwards, emphasising that they should include as much detail as possible in their re-tellings. The experimenter then left the room. After they produced their narrative, they were asked three questions to test their general understanding of the scenario. Narratives produced by the participants and responses to questions were video recorded for later analysis.

Acquiring Gold Standard Measures of Expressions in the BSLPT Retelling: Adult Signers

Two deaf adult native signers watched the BSLPT event and produced a re-telling on video. Adult native signers were selected because they have acquired sign language from birth and therefore represent best-case examples for this production task (McIntire and Reilly 1988; Costello et al. 2006; Cormier et al. 2013). The facial actions within the signed narratives of the two adults were analysed and a maximum score was created for each expression. For example, ‘demand’ consists of five facial action targets to be produced correctly (see Table 3). The analysis of these adult facial actions within a given expression formed the baseline against which the children’s videoed data were scored; the adult re-tellings thus served as a gold standard for the children’s facial expression scores. All video responses were annotated using sign language annotation software: EUDICO Linguistic Annotator (ELAN; Lausberg and Sloetjes 2009).

Table 3

Emotional expressions in the BSLPT and their corresponding facial actions

Affective expression: facial action targets based on the FACS coding of the adult native signers

Demand: (1) head push forward; (2) furrow or raise eyebrows intensely; (3) widen eyes; (4) downturned closed mouth; (5) blink slowly

Refusal: (1) head shake; (2) furrowed eyebrows; (3) frown; (4) wrinkled nose

Annoyance: (1) roll eyes; (2) raise eyebrows; (3) shrug shoulders

Surprise: (1) widen eyes; (2) raise eyebrows

Mischief: (1) look from side to side; (2) raise shoulders; (3) raise eyebrows

Disgust: (1) raise eyebrows; (2) widen eyes; (3) mouth wide open; (4) furrowed eyebrows; (5) tilted back, body or head

Scoring the Children’s BSL Narratives

Two raters scored the children’s video narratives. The first rater (the experimenter) was a native sign language user and was aware of the participants’ group status. These ratings were then sampled by a deaf native sign language user who was blind to group membership; this second rater scored 20% of the BSLPT videos (10% from each group). Both raters identified when each expression was produced and labelled it in the ELAN video file (see Fig. 1 for an example of the ELAN transcript from the adult video). They then scored the number of appropriate facial actions within each expression according to the scoring criteria in Table 3. Finally, they rated each expression for similarity to the adult model expressions, specifically whether the facial actions were ‘identical’, ‘similar’, ‘missing’ or ‘miscellaneous’. The adult re-tellings were viewed alongside the children’s productions in ELAN for direct comparison. If a participant produced some but not all of the facial action targets for a given expression, this was scored as ‘similar’; so, if two of the three facial action targets were produced for ‘annoyance’ but the signer did not shrug the shoulders (see Table 3 for further detail), this counted as ‘similar’. If no targeted facial action was produced at an expected point in the narrative, this was coded as ‘missing’. Expressive facial actions which did not correspond with those produced by the model were coded as ‘miscellaneous’.2 Raters scored the videos independently for facial expression similarity, then reached a consensus through detailed discussion to agree the final rating used in this analysis. The intraclass correlation between the independent ratings showed overall agreement of .94.
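The four-way similarity rating described above follows a simple decision rule. The sketch below is illustrative only: the raters judged video footage, not sets of action labels, and the function and its set representation are our assumptions.

```python
def rate_expression(produced, targets):
    """Categorise one expression against the adult gold-standard action targets."""
    if not produced:
        return "missing"          # no targeted facial action at the expected point
    matched = produced & targets
    if not matched:
        return "miscellaneous"    # actions that do not correspond to the model
    if matched == targets:
        return "identical"        # every gold-standard target present
    return "similar"              # some, but not all, targets present

# 'annoyance' has three facial action targets (see Table 3)
targets = {"roll eyes", "raise eyebrows", "shrug shoulders"}
print(rate_expression({"roll eyes", "raise eyebrows"}, targets))  # similar
```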

Fig. 1

Example of one of the deaf adult signers producing the ‘demand’ facial expression using ELAN

Each participant’s score was calculated from the number of productions and how many facial actions they produced for each expression, relative to the maximum possible score for that expression. For example, one participant produced three ‘demand’ facial expressions in their narrative. The maximum (gold-standard) score is five facial action targets per ‘demand’ expression, and this participant scored 11 out of 15 for ‘demand’ overall, missing four facial actions in total. Overall scores for each participant were then converted into percentages by dividing the total number of facial actions produced by the maximum number that could have been produced. The participant described above produced 32/44 facial actions, giving a facial action score of 72%. In this way, the child participants were compared on the number of overall facial actions produced (for all expressions combined) relative to the adult gold-standard measures.
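The worked example above is simple arithmetic; a minimal sketch (the function and variable names are ours):

```python
def facial_action_score(produced, maximum):
    """Percentage of gold-standard facial action targets actually produced."""
    return 100 * produced / maximum

# The participant above: 11 of 15 'demand' targets, 32 of 44 targets overall
demand_score = facial_action_score(11, 15)   # about 73.3
overall_score = facial_action_score(32, 44)  # about 72.7, reported as 72% in the text
```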


The BSLPT includes three screening questions to ensure participants had attended to and understood what happened in the scenario. The questions were asked in BSL; their English equivalents are: (1) What was on the tray? (2) Why did the boy throw the spider? (3) Why did the girl tease the boy? Each question is worth 2 points, giving a maximum obtainable score of 6 points.

There were no differences between groups in story comprehension as measured by the BSLPT content questions [t(22) = −.69, p > .05, \(\eta _{p}^{2}\) = −0.29], indicating that both groups had an equivalent understanding of the narrative.

Non-parametric Mann-Whitney tests were used to compare the groups. The groups did not differ significantly on the length of narratives produced, although the ASD group had shorter narratives [U(25) = 41, p > .05, \(\eta _{p}^{2}\) = −0.58; median duration (minutes:seconds): ASD 2:52, TD 3:23].3 There was no significant difference between the TD and ASD groups in the number of overall facial action targets produced [U(22) = 53, p > .05, \(\eta _{p}^{2}\) = −0.92]. Because of the natural variation of expression in facial actions in BSL (Sutton-Spence and Woll 1999), the deaf adult signers did not produce the maximum number of facial actions each time for each expression. The overall proportion of facial action targets produced by the deaf adults (using Table 3 as a scoring guide) was therefore 68.2% (median value). For the TD group this value was 55.8% and for the ASD group 45.4%.
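For readers unfamiliar with the statistic, the Mann-Whitney U counts, over all between-group pairs, how often one group's value exceeds the other's. The sketch below uses invented data (not the study's scores) and omits tie correction and p-value computation:

```python
def mann_whitney_u(a, b):
    """U via the pairwise definition: count pairs where a's value beats b's
    (ties count 0.5), then take the smaller of U and n_a * n_b - U."""
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return min(u, len(a) * len(b) - u)

# Invented facial-action percentages for illustration only
asd = [20, 35, 40, 45, 50]
td = [40, 55, 60, 65, 70]
print(mann_whitney_u(asd, td))  # 2.5
```

In practice a statistics package (e.g. SciPy's `mannwhitneyu`, an assumption about tooling) would also return the associated p value.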

Group Differences

Figure 2 shows how both groups fared with specific emotional expressions. The TD group produced significantly more facial actions than the ASD group for ‘demand’ [U(22) = 29.5, p < .05, \(\eta _{p}^{2}\) = −0.96] and ‘mischief’ [U(22) = 35.5, p < .05, \(\eta _{p}^{2}\) = −0.61] (see Table 4 for means and standard deviations). There were no significant group differences for the other four expressions. There were no significant differences between groups in the number of missing facial expressions [U(22) = 98.5, p > .05, \(\eta _{p}^{2}\) = 0.77]. However, the frequency of facial expressions rated ‘identical’ to the gold standard differed significantly between groups, with the TD group producing a greater number of identical expressions than the ASD group [U(22) = 37.5, p < .05, \(\eta _{p}^{2}\) = −0.70] (Fig. 3). There were no significant differences between groups in the number of similar or miscellaneous facial expressions produced.

Fig. 2

Production of specific, appropriate emotional facial expressions for ASD and TD groups

Table 4

Mean and standard deviation of facial action targets for each emotion

Deaf ASD: M (SD) | Deaf control: M (SD)
23.6 (21.2) | 48.1 (28.4)
41.5 (25.1) | 47.6 (33.6)
42.6 (33.0) | 63.8 (36.1)
47.5 (44.7) | 37.5 (37.6)
67.3 (27.8) | 49.4 (21.5)
29.3 (30.0) | 35.1 (34.0)

Fig. 3

Quality of emotional facial expressions produced in the BSLPT

The relationship between SRS severity and facial expression production was explored. There were significant correlations between SRS scores and two expressions: ‘annoyance’, r(11) = −.82, p < .05, and ‘mischief’, r(11) = −.77, p < .05. That is, higher SRS scores (more reported autistic traits) were associated with fewer ‘annoyance’ and ‘mischief’ facial actions.

To test whether differences in general sign language or narrative abilities might account for differences in emotional facial expression production, scores on the BSLPT were compared (see Herman et al. 2004 for an example of the BSLPT scoring system). Scores were calculated to determine how participants in both groups fared on narrative content (scored by awarding a point for explicit mention of each of 16 narrative episodes; maximum score = 16), narrative structure (scored for orientation, complicating actions, climax, resolution and sequence; maximum score = 12) and grammar (scored for correct use of five classes of morphological inflections and for role shift, reflecting the number of different verb forms targeted; maximum score = 30). Participants’ scores on these three parameters were marked using the BSLPT scoresheet.

A Kruskal–Wallis non-parametric test was calculated for raw scores on each test parameter. No differences were found between groups for narrative content (H(1) = .132, p = .71), narrative structure (H(1) = .89, p = .018) or grammar (H(1) = 2.132, p = .144).


This is the first study to investigate the frequency and quality of facial expressions produced during a sign language narrative by deaf children with and without ASD. No differences were found between groups in the total number of facial expressions produced, but there was a significant difference in the quality of facial actions. Compared with TD deaf children, deaf children with ASD produced fewer expressions corresponding to those produced by adult deaf signers retelling the story. When individual expressions were analysed, the groups did not differ significantly on four of the six facial expressions scored; however, deaf children with ASD produced significantly fewer facial actions for ‘demand’ and ‘mischief’ relative to deaf controls.

The reduced quality of facial expressions in the deaf ASD group was broadly consistent with the finding of emotion production impairments (MacDonald et al. 1989; Volker et al. 2009; Grossman et al. 2013) in hearing groups with ASD (Fig. 4).

Fig. 4

Narrative content, structure and grammar scores for the TD and ASD groups

Our findings do not support the hypothesis that emotional facial expressions might be preserved in deaf people with ASD because of a protective effect afforded by deafness and sign language use. Instead, the results suggest that deaf signing children with ASD show impairments in emotional facial expression production because deficits in emotion processing and theory of mind are central to autism and disrupt this ability in signers, as they do for users of a spoken language. Further research is needed to support this initial hypothesis. The current study shows impairment in the quality of some facial expressions in deaf children with ASD, analogous to the ‘odd’ expressions produced by hearing individuals with ASD (MacDonald et al. 1989; Volker et al. 2009). It is not possible to identify from our results the underlying cause of these production impairments. A number of studies with hearing ASD participants have noted the high co-occurrence of autism with alexithymia, a personality trait characterized by marked difficulty in identifying and describing one’s own emotions (Cook et al. 2013; Trevisan et al. 2016). Brewer et al. (2016), in a study with hearing ASD participants and controls, ruled out both reduced proprioceptive awareness of facial muscles and reduced motivation to produce information on the face as causes, arguing that atypical cognitive representations of facial emotions are more likely to underlie poor production of emotional expressions.

The severity of social communication impairment scored using the Autism Diagnostic Observation Schedule (ADOS; Lord et al. 1999) has been found to correlate with vocal and facial ‘awkwardness’ (Grossman et al. 2013). We also found correlations between the severity of social communication impairment in deaf children (as measured by the SRS) and scores for facial expressions; in our study this correlation held for the expressions ‘annoyance’ and ‘mischief’. SRS scores were not associated with the ‘demand’ expression, which may reflect its lower salience relative to the other expressions: it has been proposed that impairments in emotion recognition in ASD are greater for more subtle expressions of emotion (Griffiths et al. 2017).

Deaf children with ASD did not differ from controls in their ability to produce a coherent and linguistically correct narrative. The duration, content, structure and use of grammatical features in their narratives were remarkably similar to those of typically developing deaf children. Group differences in the production of targeted emotions were therefore unlikely to reflect differences in linguistic skills or in understanding the model scenario. Both groups were matched on their BSL receptive skills, and they did not significantly differ in performance on the narrative content, structure or grammar scales of the narrative production test. Nevertheless, the deaf ASD group performed numerically worse on the grammar scale, which may suggest that their comprehension of BSL outstrips their production. Further research could explore this hypothesis. In both groups, nine participants used hearing aids and cochlear implants, so it is possible that they were receiving auditory input as well as visual and signed information. This raises the possibility that it is sign language exposure, rather than deafness per se, that leads to increased attention to the face and better communication skills in this population; however, this question remains unexplored.

It is of interest that the deaf ASD group produced more facial actions for the canonical expression ‘disgust’. Volker et al. (2009) also found a non-significant trend whereby hearing individuals with ASD produced more examples of this expression than TD controls. Our finding could have been influenced by the fact that ‘disgust’ occurs at the end of the narrative, during the climax of the story. Endpoints of narratives are usually recalled more accurately due to the recency effect and because endpoints often mark the semantic climax of a story (Poulsen et al. 1979; McCabe and Peterson 1990). It has also been argued that ‘disgust’ is a particularly salient emotion, having evolutionary significance as a response to potential contamination (Rozin and Fallon 1987). It was one of the most distinctive expressions produced in the scenario, which may have meant it was picked up more readily by the ASD group, whose difficulties in processing expressions may be greater for less intense emotions (Wong et al. 2012; Griffiths et al. 2017). It would be of interest to explore further whether the intensity of the emotions produced in the scenario had any impact on their production during the retelling.

When the adult deaf signers modeled the narrative, they produced not only classic emotional expressions (‘disgust’, ‘surprise’, ‘annoyance’ (anger)) but also expressions which could involve both emotion and other cognitive and communicative features (‘mischief’, ‘demand’ and ‘refusal’). This is likely to be because the narrative itself involved a series of communicative events, including a scenario in which one character deceives another (a girl hiding a pretend spider inside a sandwich that she knows a boy will eat). Indeed, it could be argued that a theory of mind (ToM) is required to fully understand the narrative. Hearing individuals with ASD have been shown to have deficits in ToM, and often fail false belief tasks (Baron-Cohen et al. 1993; Dennis et al. 2000). Although deaf children from hearing families who are not exposed to language at an early age can also be delayed in ToM compared to TD hearing children, the delay is less severe than it is for hearing children with ASD (Peterson and Siegal 1999; Woolfe et al. 2002; Schick et al. 2007). It is possible, then, that some participants failed to use appropriate facial acts in contexts where they did not fully understand the ToM elements of the story. If ToM is more impaired in the deaf ASD participants than in the deaf controls, this may explain some differences in the use of facial expressions between the groups. For example, the deaf ASD group produced significantly fewer ‘mischief’ facial acts; in the context of the narrative, understanding ‘mischief’ involves understanding the characters’ mental states, as it refers to the girl’s deception in hiding the spider in the sandwich to trick the boy, who knows nothing of it. It is also possible that group differences in using the facial expression ‘demand’ could be explained by differences in ToM understanding, at least insofar as understanding others’ desires has been described as a mentalising skill (Wellman et al. 2000).
It is notable that the ASD children produced more examples of ‘surprise’, ‘disgust’ and ‘annoyance’ (anger), which are universal emotional facial expressions described by Ekman and Oster (1979) and relate to more instinctive emotional reactions rather than to the mental states normally associated with ToM. However, the groups differed in the quality of facial acts across all categories, and it is difficult to explain this reduction in quality in terms of understanding the ToM elements of the narrative. Instead, the quality of the facial expressions produced by deaf ASD participants seems more likely to be linked to general deficits in the production of emotional expressions in ASD (Blair 2003). Responses to the third question (asking why the girl teased the boy) give some indication of whether participants had ToM: some were able to state ‘the girl was angry’ as opposed to ‘the girl did not like spiders’. However, this would need further examination in a follow-up study. We did not directly test narrative understanding in terms of ToM in this study, but our data suggest that this would be an interesting approach to take in future studies.

A strength of this study is the use of an ecologically valid narrative methodology to elicit facial expressions; this contrasts with the more artificial techniques sometimes used, in which participants are asked to copy facial expressions. It did, however, mean that only a narrow range of expressions was explored, which did not include all of Ekman’s classic examples of facial expressions (Ekman and Oster 1979). We would recommend further studies using other narrative tasks designed to elicit more of the Ekman-categorised ‘universal’ facial emotions, so that groups can be compared across a wider range of emotions, which could also differ in the strength of the emotion portrayed. We also had a relatively small sample size due to the difficulty of recruiting deaf ASD participants. Any null group differences may be attributable to a lack of statistical power, and caution should therefore be taken in extrapolating our findings to the wider deaf ASD and deaf TD populations. As recruitment was largely opportunistic, there was a discrepancy between the ages the BSLPT is designed for (4–11) and the ages of the participants in the sample (8.5–17). Future research administering this measure to younger deaf children with ASD would be useful to demonstrate whether performance is poorer at a younger age as a result of less sign language exposure or of developmental delay. In addition, the use of relatively large age ranges, e.g. from early childhood to adolescence, makes it difficult to compare results between studies to determine whether people with ASD show any developmental differences in their narrative production skills.

One relevant caveat of the current study is the lack of assessment of imitation or motor skills in the deaf children with ASD. It is well known that children with ASD often have a range of motor deficits (Ghaziuddin and Butler 1998; Jansiewicz et al. 2006; McPhillips et al. 2014). Such deficits may affect the production of BSL signs in deaf children with ASD and subsequently hinder the emergence of social communication skills.

Facial expressions in BSL convey both emotion and linguistic meaning. The present study focused on emotional facial expressions. The production of BSL linguistic facial expressions will be reported in a future paper. We expect that there will be differences in how both groups use emotional and linguistic facial expressions since linguistic expressions are more rule-governed, and may be easier for individuals with ASD to learn.

The findings from the current study on facial emotion production lend some support to the hypothesis that deaf children with ASD have difficulties similar to those of their hearing counterparts. However, there is still much research to be done in this field. In particular, research exploring whether hearing individuals with ASD can learn and use facial actions in BSL could inform whether the use of sign language helps facilitate attention to the face and improve communication skills.

Impairments in facial expression production in deaf individuals with ASD are likely to affect interactions with other deaf signers, which may further exacerbate social communication difficulties (Shriberg et al. 2001; McCann and Peppé 2003; Paul et al. 2005). Further research is needed on interventions for this group, using adaptations of evidence-based approaches for hearing children with ASD (Watkins et al. 2015). In the first instance, deaf children with ASD should be taught to discriminate between emotional facial expressions in BSL and to produce them appropriately. These basic emotion recognition skills could then be developed further to enhance social communication. For example, deaf children with ASD could benefit from attending social skills groups facilitated in sign language to foster social and communication skills. In addition, deaf children with ASD should be assessed during naturalistic social situations when communicating in BSL, to compare whether self-generated production of emotional facial expressions differs from that elicited by observations or narratives. This would lead to a greater awareness of the use of emotional facial expressions in BSL in this population and subsequently inform the development of further ecologically valid interventions. Interventions for hearing children with ASD focused on emotional prosody emphasise using stronger intonation cues, e.g. lower pitch, slower tempo and louder amplitude (Wang et al. 2006; Matsuda and Yamamoto 2013). In a similar manner, the teaching of non-manual prosodic markers in BSL could be made more explicit to deaf children with ASD, so that facial actions are exaggerated, made more salient or taught more intensively to facilitate the learning and use of facial expressions.
Alternatively deaf children with ASD could be taught compensatory cognitive or linguistic strategies (Rutherford and McIntosh 2007), for example, always producing the manual sign for the emotion that they are referring to in addition to the non-manual facial action to clarify meaning.

This study shows that deaf children with ASD display subtle differences in their production of emotional facial actions during narrative retelling. These data are an important first step in documenting differences in sign language production in deaf children with ASD, with the affected aspects of communication relating to emotion and mentalising about the internal states of others.


  1.

    The production and comprehension of face acts intrinsic to linguistic processes is the topic of a separate report.

  2.

    We excluded from this analysis facial expressions that were clearly linguistic aspects of the BSL production, such as grammatical markers.

  3.

    This scoring procedure only takes account of targeted facial actions produced at precise narrative points by the children: other facial actions occurring during the retelling were not scored.



This work was supported by an Economic and Social Research Council-funded Ph.D. stipend awarded to Tanya Denmark. The research of Dr Joanna Atkinson was supported by grants from the Economic and Social Research Council of Great Britain (RES-620-28-6001 and RES-620-28-0002): Deafness Cognition and Language Research Centre. The authors express their gratitude to Dr Robert Adam, Mischa Cooke and Ramas Rentelis for help with filming and coding and for comments on the methodology; Mike Coleman for support with experimental design; Professor Bencie Woll for comments and guidance; Dr Adam Schembri for linguistic advice; Dr Constanza Moreno for clinical input and support; and to all the children, schools and parents who participated in and supported this research study.

Author Contributions

All authors (TD, RC, JS, JA) contributed to the study, by jointly participating in the design, interpretation, statistical analysis and draft of the manuscript. The lead author (TD) additionally participated in the coordination of the study and recruitment and testing. All authors read and approved the final manuscript.


  1. Agrafiotis, D., Canagarajah, C. N., Bull, D. R., Kyle, J. G., Twyford, H. E., & Dye, M. (2006). A perceptually optimised video coding system for sign language communication at low bit rates. Signal processing: Image Communications, 21(7), 531–549.Google Scholar
  2. American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: American Psychiatric Association.Google Scholar
  3. Ashwin, C., Chapman, E., Colle, L., & Baron-Cohen, S. (2006). Impaired recognition of negative basic emotions in autism: A test of the amygdala theory. Social Neuroscience, 1(3–4), 349–363.Google Scholar
  4. Baltaxe, C. A. M., & Guthrie, D. (1987). The use of primary sentence stress by normal, aphasic and autistic children. Journal of Autism and Developmental Disorders, 17(2), 255–271.Google Scholar
  5. Baron-Cohen, S., Spitz, A., & Cross, P. (1993). Do children with autism recognise surprise? A research note. Cognition and Emotion, 7(6), 506–516.Google Scholar
  6. Bellugi, U., Grady, L., Lillo-Martin, D., Grady, M., Van Hoek, K., & Corina, D. (1990). Enhancement of spatial cognition in deaf children. In V. Volterra & C. J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 278–298). New York: Springer.Google Scholar
  7. Bettger, J. G., Emmorey, K., McCullough, S., & Bellugi, U. (1997). Enhanced facial discrimination: Effects of experience with American Sign Language. Journal of Deaf Studies and Deaf Education, 2(4), 223–233.Google Scholar
  8. Bieberich, A. A., & Morgan, S. B. (2004). Self-regulation and affective expression during play in children with autism or Down Syndrome: A short-term longitudinal study. Journal of Autism and Developmental Disorders, 34(4), 439–448.Google Scholar
  9. Blair, J. (2003). Facial expressions, their communicative functions and neurocognitive substrates. Philosophical Transactions of the Royal Society of London B, 358, 561–572.Google Scholar
  10. Brentari, D., & Crossley, L. (2002). Prosody on the hands and face: Evidence from American Sign Language. Sign Language and Linguistics, 5(2), 105–130.Google Scholar
  11. Brewer, R., Biotto, F., Catmur, C., Press, C., Happe, F., Cook, R., et al. (2016). Can neurotypical individuals read autistic facial expressions? Atypical production of emotional facial expressions in autism spectrum disorders. Autism Research, 9(2), 262–271.Google Scholar
  12. Campbell, R., Woll, B., Benson, P. J., & Wallace, S. B. (1999). Categorical perception of face actions: Their role in sign language and in communicative facial displays. Quarterly Journal of Experimental Psychology A, 52(1), 67–95.Google Scholar
  13. Capps, L., Kasari, C., Yirmiya, N., & Sigman, M. (1993). Parental perception of emotional expressiveness in children with autism. Journal of Consulting and Clinical Psychology, 61(3), 475–484.Google Scholar
  14. Capps, L., Kehres, J., & Sigman, M. (1998). Conversational abilities among children with autism and children with developmental delays. Autism, 2(4), 325–344.Google Scholar
  15. Carminati, M. N., & Knoeferle, P. (2013). Effects of speaker emotional facial expression and listener age on incremental sentence processing. PLoS ONE, 8(9), e72559.Google Scholar
  16. Chen, C. H., Lee, I.-J., & Lin, L. (2015). Augmented reality-based self-facial modelling to promote the emotional expression and social skills of adolescents with autism spectrum disorders. Research in Developmental Disabilities, 36, 396–403.Google Scholar
  17. Constantino, J. N., & Gruber, C. P. (2005). Social Responsiveness Scale. Los Angeles: Western Psychological Services.Google Scholar
  18. Cook, R., Brewer, R., Shah, P., & Bird, G. (2013). Alexithemia, not autism, predicts poor recognition of emotional facial expressions. Psychological Science, 24(5), 723–732.Google Scholar
  19. Corina, D. P., Bellugi, U., & Reilly, J. (1999). Neuropsychological studies of linguistic and affective facial expressions in Deaf signers. Language and Speech, 42, 307–331.Google Scholar
  20. Cormier, K., Smith, S., & Sevcikova, Z. (2013). Predicate structures, gesture and simultaneity in the representation of action in British Sign Language: Evidence from deaf children and adults. Journal of Deaf Studies and Deaf Education, 18(3), 370–390.Google Scholar
  21. Costello, B., Fernandez, F., & Landa, A. (2006). The non-(existent) native signer: Sign Language research in a small deaf population. In B. Costello, J. Fernández, & A. Landa (Eds.), Theoretical issues in Sign Language Research (TISLR) 9 Conference. Florianopolis, Brazil.Google Scholar
  22. Dachovsky, S., & Sandler, W. (2009). Visual intonation in the prosody of a sign language. Language and Speech, 52, 287–314.Google Scholar
  23. Dawson, G., Hill, D., Spencer, A., Galpert, L., & Watson, L. (1990). Affective exchanges between young autistic children and their mothers. Journal of Abnormal Child Psychology, 18(3), 335–345. (Erratum Journal of Abnormal Child Psychology (1991) 19(1), 115).Google Scholar
  24. Denmark, T., Atkinson, J., Campbell, R., & Swettenham, J. (2014). How do typically developing deaf children and deaf children with autism spectrum disorder use the face when comprehending emotional facial expressions in British Sign Language? Journal of Autism and Developmental Disorders, 44, 2584–2592.Google Scholar
  25. Dennis, M., Lockyer, L., & Lazenby, A. (2000). How high functioning children with autism understand real and deceptive emotion. Autism, 4, 370–379.Google Scholar
  26. Dziobek, I., Bahnemann, M., Convit, A., & Heekeren, H. R. (2010). The role of the fusiform-amygdala system in the pathophysiology of autism. Archives of General Psychiatry, 67(4), 397–405.Google Scholar
  27. Ekman, P. (2000). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 45–60). Sussex: John Wiley & Sons.Google Scholar
  28. Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto: Consulting Psychologists Press.Google Scholar
  29. Ekman, P., & Oster, H. (1979). Facial expressions of emotion. Annual Review of Psychology, 30, 527–554.Google Scholar
  30. Elliott, E. A., & Jacobs, A. M. (2013). Facial expressions, emotions and sign languages. Frontiers in Psychology, 4, 115.Google Scholar
  31. Emmorey, K., Thompson, R., & Colvin, R. (2009). Eye gaze during comprehension of American Sign Language by native and beginning signers. Journal of Deaf Studies & Deaf Education, 14(2), 237–243.Google Scholar
  32. Ghaziuddin, M., & Butler, E. (1998). Clumsiness in autism and Asperger syndrome: A further report. Journal of Intellectual Disability Research, 42(1), 43–48.Google Scholar
  33. Greimel, E., Schulte-Rüther, M., Fink, G. R., Piefke, M., Herpertz-Dahlmann, B., & Konrad, K. (2010). Development of neural correlates of empathy from childhood to early adulthood: An fMRI study in boys and adult men. Journal of Neural Transmission, 117, 781–791.Google Scholar
  34. Griffiths, S., Jarrold, C., Penton-Voak, I. S., Woods, A. T., Skinner, A. L., & Munafo, M. R. (2017). Impaired recognition of basic emotions from facial expressions in young people with autism spectrum disorder: Assessing the importance of expression intensity. Journal of Autism and Developmental Disorders.Google Scholar
  35. Grossman, R. B., Edelson, L. R., & Tager-Flusberg, H. (2013). Emotional facial and vocal expressions during story retelling by children and adolescents with high-functioning autism. Journal of Speech, Language and Hearing Research, 56, 1035–1044.Google Scholar
  36. Hansen, S., & Scott, J. (2018). A systematic review of the autism research with children who are deaf or hard of hearing. Hammill Institute on Disabilities, 39(2), 330–334.Google Scholar
  37. Harms, M. B., Martin, A., & Wallace, G. L. (2010). Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies. Neuropsychology Review, 20(3), 290–322.Google Scholar
  38. Hauthal, N., Neumann, M. F., & Schweinberger, S. R. (2012). Attentional spread in deaf and hearing participants: Face and object distractor processing under perceptual load. Attention, Perception and Psychophysics, 74(6), 1312–1320.Google Scholar
  39. Hendriks, B. (2007). Negation in Jordanian Sign language: A cross-linguistic perspective. In P. Perniss, R. Pfau & M. Steinbach (Eds.), Visible variation: Cross-linguistic studies in sign language structure (pp. 104–128). Berlin: Mouton de Gruyter.Google Scholar
  40. Herman, R., Grove, N., Holmes, S., Morgan, G., Sutherland, H., & Woll, B. (2004). Assessing British Sign Language development: Production test (narrative skills). London: City University.Google Scholar
  41. Herman, R., Holmes, S., & Woll, B. (1999). Assessing British Sign Language development: Receptive skills test. Coleford, UK: Forest Book Services.Google Scholar
  42. Herman, R., Rowley, K., Mason, K., & Morgan, G. (2014). Deficits in narrative abilities in child British Sign Language users with specific language impairment. International Journal of Language and Communication Disorders, 49(3), 343–353.Google Scholar
  43. Hubbard, D. J., Faso, D. J., Assman, P. F., & Sasson, N. J. (2017). Production and perception of emotional prosody by adults with autism spectrum disorder. Autism Research, 10(12), 1991–2001.Google Scholar
  44. Hubbard, K., & Trauner, D. A. (2007). Intonation and emotion in autistic spectrum disorders. Journal of Psycholinguistic Research, 36, 159–173.Google Scholar
  45. Jansiewicz, E. M., Goldberg, M. C., Newschaffer, C. J., Denckla, M. B., Landa, R., & Mostofsky, S. H. (2006). Motor signs distinguish children with high functioning autism and Asperger’s syndrome from controls. Journal of Autism and Developmental Disorders., 36(5), 613–621.Google Scholar
  46. Jones, A. C., Toscano, E., Botting, N., Marshall, C. R., Atkinson, J. R., Denmark, T., et al. (2016). Narrative skills in deaf children who use spoken English: Dissociations between macro and microstructural devices. Research in Developmental Disabilities, 59, 268–282.Google Scholar
  47. Kasari, C., Sigman, M., Mundy, P., & Yirmiya, N. (1990). Affective sharing in the context of joint attention interactions of normal, autistic, and mentally retarded children. Journal of Autism and Developmental Disorders, 20(1), 87–100.Google Scholar
  48. Lartseva, A., Dijkstra, T., Kan, C. C., & Buitelaar, J. K. (2014). Processing of emotion words by patients with autism spectrum disorders: Evidence from reaction times and EEG. Journal of Autism and Developmental Disorders, 44(11), 2882–2894.Google Scholar
  49. Lausberg, H., & Sloetjes, H. (2009). Coding gestural behavior with the NEUROGES-ELAN system. Behavior Research Methods, Instruments, & Computers, 41(3), 841–849.Google Scholar
  50. Lee, A., Hobson, R. P., & Chiat, S. J. (1994). I, you, me, and autism: An experimental study. Journal of Autism and Developmental Disorders, 24(2), 155–176.Google Scholar
  51. Lindner, J. L., & Rosen, L. A. (2006). Decoding of emotion through facial expression, prosody and verbal content in children and adolescents with Asperger’s syndrome. Journal of Autism and Developmental Disorders, 36(6), 769–777.Google Scholar
  52. Lord, C., Rutter, M., DiLavore, P. C., & Risi, S. (1999). Autism Diagnostic Observation Schedule-WPS (ADOS-WPS). Los Angeles: Western Psychological Services.Google Scholar
  53. MacDonald, H., Rutter, M., Howlin, P., Rios, P., Le Couteur, A., Evered, C., & Folstein, S. (1989). Recognition and expression of emotional cues by autistic and normal adults. Journal of Child Psychology and Psychiatry, 30, 865–877.Google Scholar
  54. MacSweeney, M., Capek, C. M., Campbell, R., & Woll, B. (2008). The signing brain: The neurobiology of sign language. Trends in Cognitive Science, 12(11), 432–440.Google Scholar
  55. Matsuda, S., & Yamamoto, J. (2013). Intervention for increasing the comprehension of affective prosody in children with autism spectrum disorders. Research in Autism Spectrum Disorders, 7(8), 938–946.Google Scholar
  56. McCabe, A., & Peterson, C. (1990). What makes a narrative memorable? Applied Psycholinguistics, 11, 73–82.Google Scholar
  57. McCann, J., & Peppé, S. (2003). Prosody in autism spectrum disorders: A critical review. International Journal of Language and Communication Disorders, 38(4), 325–350.Google Scholar
  58. McIntire, M. L., & Reilly, J. S. (1988). Nonmanual behaviors in L1 & L2 learners of American Sign Language. Sign Language Studies, 61, 351–375.Google Scholar
  59. McPhillips, M., Finlay, J., Bejerot, S., & Hanley, M. (2014). Motor deficits in children with autism spectrum disorder: A cross-syndrome study. Autism Research, 7, 664–676.Google Scholar
  60. Megreya, A. M., & Bindemann, M. (2017). A visual processing advantage for young-adolescent deaf observers: Evidence from face and object matching tasks. Scientific Reports, 7, 1–6.Google Scholar
  61. Mitchell, T. V., Letourneau, S. M., & Maslin, M. C. (2013). Behavioral and neural evidence of increased attention to the bottom half of the face in deaf signers. Restorative Neurology and Neuroscience., 31(2), 125–139.Google Scholar
  62. Mood, D., & Shield, A. (2014). Clinical use of the Autism Diagnostic Observation Schedule-second edition with children who are deaf. Seminars in Speech and Language, 35(4), 288–300.Google Scholar
  63. Morgan, G., Smith, N., Tsimpli, I., & Woll, B. (2002). Language against the odds: The learning of British Sign Language by a polyglot savant. Journal of Linguistics, 38, 1–41.Google Scholar
  64. Moul, C., Cauchi, A., Hawes, D. J., Brennan, J., & Dadds, M. R. (2014). Differentiating Autism Spectrum Disorder and overlapping psychopathology with a brief version of the Social Responsiveness Scale. Child Psychiatry and Human Development, 46(1), 1–11.Google Scholar
  65. Mundy, P., Delgado, C. E., Block, J., Venezia, M., Hogan, A., & Seibert, J. (2003). A manual for the abridged Early Social Communication Scales (ESCS). Coral Gables, FL: University of Miami Psychology Department.Google Scholar
  66. Pantic, M., & Rothkrantz, L. J. M. (2004). Facial action recognition for facial expression analysis from static face images. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 34(3), 1449–1461.Google Scholar
  67. Paul, R., Shriberg, L. D., McSweeny, J., Cicchetti, D., Klin, A., & Volkmar, F. (2005). Brief report: Relations between prosodic performance and communication and socialization ratings in high functioning speakers with autism spectrum disorders. Journal of Autism and Developmental Disorders, 35(6), 861–869.Google Scholar
  68. Peppé, S., McCann, J., Gibbon, F. O., Hare, A., & Rutherford, M. (2006). Assessing prosodic and pragmatic ability in children with high functioning autism. Journal of Pragmatics, 38, 1776–1791.Google Scholar
  69. Peterson, C. C., & Siegal, M. (1999). Representing inner worlds: Theory of mind in autistic, deaf and normal hearing children. Psychological Science, 10, 126–129.Google Scholar
  70. Poizner, H., Klima, E. S., & Bellugi, U. (1990). Signers with strokes: Left hemisphere lesions, Chap. 3. In H. Poizner, E. S. Klima, & U. Bellugi (Eds.), What the hands reveal about the brain (pp. 133–150). London: MIT Press.Google Scholar
  71. Poulsen, D., Kintsch, E., & Kintsch, W. (1979). Children’s comprehension and memory for stories. Journal of Experimental Child Psychology, 28, 379–403.Google Scholar
  72. Quinto-Pozos, D., Forber-Pratt, A. J., & Singleton, J. L. (2011). Do developmental communication disorders exist in the signed modality? Perspectives from professionals. Language, Speech, and Hearing Services in Schools, 42(4), 423–443.Google Scholar
  73. Raven, J., Raven, J. C., & Court, J. H. (1998). Raven’s Progressive Matrices and Vocabulary Scales. Oxford: Oxford Psychologists Press.Google Scholar
  74. Roid, G. H., & Miller, L. J. (1997). Leiter International Performance Scale-Revised. Wood Dale: Stoelting Co.Google Scholar
  75. Rozin, P., & Fallon, A. E. (1987). A perspective on disgust. Psychological Review, 94(1), 23–41.Google Scholar
  76. Rutherford, M. D., & McIntosh, D. N. (2007). Rules versus prototype matching: Strategies of perception of emotional facial expressions in the autism spectrum. Journal of Autism and Developmental Disorders, 37(2), 187–196.Google Scholar
  77. Scawin, C. (2003). Communication and social interaction checklist for deaf children who sign. MSc project. London: City University.Google Scholar
  78. Schick, B., De Villiers, P., De Villiers, J., & Hoffmeister, R. (2007). Language and theory of mind: A study of deaf children. Child Development, 78(2), 376–396.Google Scholar
  79. Sebe, N., Lew, M. S., Cohen, I., Sun, Y., Gevers, T., & Huang, T. (2007). Authentic facial expression analysis. Proceedings of international conference on automatic face and gesture recognition. Image and Vision Computing, 25, 1856–1863.Google Scholar
  80. Shield, A., Cooley, F., & Meier, R. P. (2017). Sign language echolalia in deaf children with autism spectrum disorder. Journal of Speech, Language and Hearing Research, 60(6), 1622–1634.Google Scholar
  81. Shield, A., & Meier, R. P. (2012). Palm reversal errors in native-signing children with autism. Journal of Communication Disorders, 45(6), 439–454.Google Scholar
  82. Shield, A., Meier, R. P., & Tager-Flusberg, H. (2015). The use of sign language pronouns by native-signing children with autism. Journal of Autism and Developmental Disorder, 45(7), 2128–2145.Google Scholar
  83. Shield, A., Pyers, J., Martin, A., & Tager-Flusberg, H. (2016). Relations between language and cognition in native-signing children with autism spectrum disorder. Autism Research, 9(12), 1304–1315.Google Scholar
  84. Shriberg, L. D., Paul, R., McSweeny, J. L., Klin, A. M., Cohen, D. J., & Volkmar, F. R. (2001). Speech and prosody characteristics of adolescents and adults with high-functioning autism and Asperger syndrome. Journal of Speech Language & Hearing Research, 44(5), 1097–1115.Google Scholar
  85. Stagg, S. D., Slavny, R., Hand, C., Cardoso, A., & Smith, P. (2014). Does facial expressivity count? How typically developing children respond initially to children with autism. Autism, 18(6), 704–711.Google Scholar
  86. Stokoe, W. C. (2005). Sign language structure: An outline of the visual communication systems of the American deaf. Journal of Deaf Studies and Deaf Education, 10(1), 3–37.Google Scholar
  87. Sutton-Spence, R., & Woll, B. (1999). The Linguistics of British Sign Language: An introduction. Cambridge: Cambridge University Press.Google Scholar
  88. Szymanski, C. A., Brice, P. J., Lam, K. H., & Hotto, S. A. (2012). Deaf children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 42(10), 2027–2037.Google Scholar
  89. Thompson, R. L., Langdon, C., & Emmorey, K. (2009). Understanding the linguistic functions of eyegaze in American Sign Language. Paper presented at the CUNY Conference on Human Sentence Processing, March 2009, Davis, CA.Google Scholar
  90. Trevisan, D. A., Bowering, M., & Birmingham, E. (2016). Alexithymia, but not autism spectrum disorder, may be related to the production of emotional facial expressions. Molecular Autism, 7, 46. Scholar
  91. Volker, M., Lopata, A. C., Smith, D. A., & Thomeer, M. L. (2009). Facial encoding of children with high functioning autism spectrum disorders. Focus on Autism and Other Developmental Disabilities, 24, 195–204.Google Scholar
  92. Wang, A. T., Lee, S. S., Sigman, M., & Dapretto, M. (2006). Neural basis of irony comprehension in children with autism: The role of prosody and context. Brain, 129, 932–943.Google Scholar
  93. Watkins, L., Kuhn, M., Ledbetter-Cho, K., Gevarter, C., & Reilly, M. (2015). Evidence-based social communication interventions for children with autism spectrum disorder. The Indian Journal of Pediatrics, 10, 1–8.Google Scholar
  94. Wellman, H. M., Phillips, A. T., & Rodriguez, T. (2000) Young childrens understanding of perception, desire and emotion. Child Development 71(4), 895–912.Google Scholar
  95. Wing, L., Leekam, S. R., Libby, S. J., Gould, J., & Larcombe, M. (2002). The Diagnostic Interview for Social and Communication Disorders: Background, inter-rater reliability and clinical use. Journal of Child Psychology and Psychiatry. 43(3):307–25.Google Scholar
  96. Wong, N., Beidel, D. C., Sarver, D. E., & Sims, V. (2012). Facial emotion recognition in children with high functioning autism and children with social phobia. Child Psychiatry and Human Development, 43(5), 775–794.Google Scholar
  97. Woolfe, T., Want, S. C., & Siegal, M. (2002). Signposts to development: Theory of mind in deaf children. Child Development, 73(3), 768–778.Google Scholar
  98. Yirmiya, N., Kasari, C., Sigman, M., & Mundy, P. (1989). Facial expressions of affect in autistic mentally retarded and normal children. Journal of Child Psychology and Psychiatry, 30, 725–735.Google Scholar
  99. Zeng, Z., Fu, Y., Roisman, G. I., Wen, Z., Hu, Y., & Huang, T. (2006). Spontaneous emotional facial expression detection. Journal of Multimedia, 1, 1–8.Google Scholar

Copyright information

© The Author(s) 2018

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. 1.Division of Psychology and Language Science, Department of Language and CognitionUniversity College LondonLondonUK
  2. 2.Division of Psychology and Language Science, Deafness, Cognition and Language Research CentreUniversity College LondonLondonUK

Personalised recommendations