Introduction

Children with profound sensorineural hearing loss (SNHL) experience delays in learning to understand the speech of others and to produce intelligible speech. These delays are rooted in a lack of refined access to the spectral and temporal cues of the acoustic–phonologic components of speech. That is, to learn to understand the speech of others and to speak themselves, a young child must hear the sounds of speech. When armed with such access through hearing technologies, and through the influences of a highly dynamic system, children with SNHL can begin to take command of the basic structures of their native, spoken language (Smith and Thelen 2003). Without such access, these children face challenges in their cognitive and psychosocial development and academic performance. Together, such cascading consequences carry downstream implications for employment and quality of life (Summerfield and Marshall 1999; Cheng et al. 2000).

When traditional amplification devices (hearing aids) are unable to restore access to the full range of phonemic components of speech, a cochlear implant (CI) is a widely used treatment option for children with SNHL (Bradham and Jones 2008). CI stimulation of the auditory pathway is made feasible by robust reserves of auditory neurons that persist in deafness. On average, about half of the peripheral neuronal complement of the cochlea survives even when deafness is profound and of early onset (Nadol 1997). Furthermore, surviving neurons retain responsivity to electrical stimulation. Electrical contacts of a CI device implanted into the cochlea can generate currents that stimulate subpopulations of auditory neurons. When configured across channels (to convey pitch information), variations in the power and tempo of electrical currents can encode sound via spike trains carried by auditory neurons. Acoustic inputs are thus conveyed to CNS auditory stations for encoding. Advances in sound processors and related software have enhanced the fidelity with which complex sounds are processed into physiologically meaningful codes.

Evidence of basic perceptual gains following cochlear implantation is found in consistent improvements in hearing thresholds (Pulsifer et al. 2003). However, improved thresholds for sound awareness represent only a preliminary measure of the intervening effect of a cochlear implant. A vast range of levels of hearing and communication ability is observed in children who receive cochlear implants, and the true impact is measured by outcomes more consequential than awareness of sound.

Because early-onset deafness is typically recessive in its genetic transmission, or is acquired as a result of infection, most deaf children grow up in hearing households. Manual strategies of signing can overcome many barriers to communication; however, the prevalence of signing in society is limited, and the depth of engagement with sign language in hearing families does not systematically expose deaf children to abstract semantic content (Mitchell and Quittner 1996). The resulting communication mismatch between a deaf infant and a hearing family associates with higher levels of parenting stress, less developmental scaffolding, and reduced sensitivity in parent–child interactions, with negative consequences for later linguistic and psychosocial development (Polat 2003; Nikolopoulos et al. 2004b).

The notion of a “successful implant” often relates to the parents’ perception of how their child will best relate to the outside world. Hearing parents who seek alleviation of their child’s deafness will commonly express a goal of providing their child with options for real engagement with the mainstream, specifically in play and school at an early age, and in vocational options and life chances in adulthood. The pervasive nature of communication, within the family and in society, suggests a standard metric of outcome. Because the goal of restored hearing in a deaf child is to enable useful hearing, a key measure of outcome should reflect how a deaf child’s experience with a CI develops into the effective use of spoken language. Parental surveys indicate that the outcome of greatest concern after surgical intervention in children with SNHL is the level of spoken language achieved (Nikolopoulos et al. 2004a).

Despite its importance, the study of language development in children with cochlear implants presents methodological challenges. Useful experimental approaches must address the high variability of language performance observed in pediatric populations in general and attempt to control for confounds in smaller implanted populations. Although high variability complicates analysis, it also creates the potential for key research insights. Clinically, properly assessed variability offers an opportunity to understand modifiers of outcome of this treatment for deaf children and to predict and promote success in language acquisition. More general research questions related to neurodevelopment arise as well; understanding the factors that contribute to variation in CI outcomes in this population offers insight into the interaction of influences that contribute to language learning in general. Children identified as candidates for CIs represent a population that has experienced significant auditory deprivation during a period when communication growth normally advances at an accelerated rate. With intervention and an ultimate restoration of auditory inputs, studies of developmental effects can offer key neurobiological perspectives (Pisoni et al. 2008; Smith et al. 1998). The enormous variation observed in measures of early speech and language development in children with a CI thus calls for multivariable assessment of intervening and modifying variables, in which baseline variables are captured with accuracy and their longitudinal associations are measured in a way that averts floor and ceiling effects.

Prior clinical studies have supported the use of an early CI, though their case series designs have generally been limited by lack of generalizability, insufficient sample sizes to support conclusions, inability to account for confounds, lack of assessment of parental influences, and absence of parallel observations from a control group (Fink et al. 2007). Our current study attempts to address these limitations. The Childhood Development after Cochlear Implantation (CDaCI) study is a prospective, multidimensional, multisite trial that examines several dimensions of language learning in children receiving a CI under the age of 5 years, with normal-hearing, age-matched controls. A longitudinal design enables the development of growth curves for examining modifiers of language learning. The design of the CDaCI study also enables adjustment for factors known to contribute to language learning and examination of novel predictors of implant outcomes, such as the quality of parent–child interactions.

Here, we summarize findings that demonstrate the prognostic value for language learning of baseline language development, parent–child interactions, and age and hearing level at the time of CI surgery. Against a background of the dynamics of language learning and epigenetics, we consider a multivariable model of predictive and modifying factors. The model demonstrates significantly improved developmental trajectories for verbal language in children implanted before 18 months of age, consistent with epigenetic control of language learning, and shows that modifying factors vary in their impact across age-at-implantation groupings of CDaCI participants.

Language learning

Language can be defined as “an internalized, abstract knowledge that is the basis for communication” and functions as “a window on thought” (Jackendoff 1994). Language provides the tools to reveal ourselves to others in establishing and maintaining relationships and drives perceptual learning that contributes to cognition.

Language arises through successive, organizational adaptations. Thus, the study of how children learn a language brings together myriad influences and activities that enable a child to become linguistically engaged (Mellon 2009). Children learn spoken language by developing knowledge and skills based in the phonology of the sound system, semantics (meaning), the rules of grammar, and the pragmatics of interaction (Rescorla and Mirak 1997). A child’s eventual mastery of language entails a timely convergence of these systems of skills. Mastery in each system contributes to full communicative competence (Rice 1989), as language acquisition flows from the effects of relative success in one sphere (e.g., phonology) on others (e.g., vocabulary, morphosyntax, and pragmatics).

Early stages of language learning require a child to extract acoustic representations from speech streams. Through such experiences, a child discovers regularities that enable meaning and insight into the grammatical rules of spoken language. A typical infant’s experience in the first year engages behavioral and innate perceptual abilities that provide a framework for later acquisition of language. Because speech production is not yet manifest, infants and young toddlers with SNHL are likely to go undiagnosed during this period and hence may remain isolated from early linguistic experience (Marschark 1997). Unfortunately, the delay in exposure to appropriate language models is often reflected in poor language outcomes (Yoshinaga-Itano et al. 1998). Age of acquisition affects language outcomes regardless of modality. Both signed and spoken languages appear subject to timing constraints for optimal learning (Mayberry and Fischer 1989; Newport 1988, 1990; Padden and Ramsey 1998). If language is introduced after this period, deaf children typically must be painstakingly taught language instead of the experience-based acquisition of language that characterizes typical development (Bench 1992). As this process is less efficient, most hearing-impaired children will be unable to fully overcome the linguistic, social, and cognitive challenges associated with delayed exposure to language (Vernon and Wallrabenstein 1984; Vaccari and Marschark 1997).

Cochlear implants can improve access to ambient language but are usually provided at ages after early developmental stages for the domains of language have begun. For example, a toddler of 3 years ordinarily understands three fourths of the vocabulary that will ultimately support his or her daily conversation (White 1979). By age 4, most children have achieved sufficient mastery of the phonological, grammatical, and pragmatic systems to be considered native speakers or signers (Crystal 1997).

Epigenetic background

The human cortex has substantial potential for epigenetic modification of function (Panksepp 2000; Sur and Leamey 2001). An example comes from studies of the rodent visual system demonstrating that vision can be rescued following removal of the visual cortex in utero, possibly due to epigenetic modifications that reorganize the surrounding parietal cortex, which then takes on a visual function (Horng and Sur 2006). Similarly, human studies have demonstrated the ability to recover fundamental aspects of vision in congenitally blind adults whose vision is restored (Ostrovsky et al. 2009; Mandavilli 2006). The same phenomenon has been observed in the auditory system, where cortical reorganization compensates for alterations induced by cochlear dysfunction (Rajan et al. 1993; McDermott et al. 1998). We can extract from this the possibility that cortical functions, including the encoding of sound that subserves spoken language acquisition, are shaped by epigenetic modifications (Sur and Rubenstein 2005). Auditory information is initially collected by the external ear and transmitted through the middle ear to the inner ear, where it is converted from vibrations in the cochlea to electrical signals in the auditory nerve. We are born with a developed cochlea, and brain processing of information from the cochlea develops substantially in postnatal periods (Gordon et al. 2008). This period of CNS modification entails adjustments that are likely guided by a combination of environmental exposure and epigenetic expression.

During postnatal development of the central nervous system, synaptogenesis plays a key role in learning, and synaptic connectivity is the primary neuronal correlate of the representation of knowledge within the brain (Elman et al. 1996; Kral and Eggermont 2007). During this period, cognitive development is determined largely by experience as gene expression specifies the function and ultimate fate of neurons and their synaptic connectivity. With changes in synaptic number and patterns of connectivity, inputs to cortical regions and thalamic nuclei and modulatory controls are established. Such connections transmit neurochemicals implicated in states of arousal and reward. While such models fail to account for a lockstep mapping between language learning and correlates of neuronal connectivity, we can infer that certain stages of neurodevelopment set the stage for time-sensitive readiness for learning based on perception and amenability to experience-driven change (wherein learning itself contributes to the complexity of brain structure).

Examination of the transcription factors involved in synaptic modification has demonstrated the important epigenetic role of chromatin in neuronal function, as well as the function of transcriptional programs that ultimately direct synaptic maturation, the definitive regulator of sensitive periods (Hong et al. 2005). Genome-wide analyses suggest, for example, that the activity-dependent, ubiquitously expressed transcription factor MEF2 regulates a transcriptional program in neurons that controls synapse development (Flavell et al. 2008). The role of interneurons and their associated proteins has also been discussed in the regulation of these periods (Morishita and Hensch 2008). The postnatal environment appears to have a large impact on the length of sensitive periods for development. For example, when rodents are separated from their mothers but subsequently placed in an enriched environment, the effects of separation (e.g., stress responses and poor cognitive performance) are normalized (Mohammed et al. 1993; Nithianantharajah and Hannan 2006). This reversal suggests the ability of appropriate environmental cues to promote epigenetic changes that rescue normal cognitive function (Francis et al. 2002; Hannigan et al. 2007). The environmental impact on cognitive function and learning ability is also evident in rodent studies: socially enriched environments increase exploration of novel environments as well as the rate of conditioning, whether rats spend time with biological or foster mothers (Kiyono et al. 1985; Dell and Rose 1987).

Caregivers, especially mothers, have extended prenatal and postnatal interactions with their children, with direct implications for behavioral phenotype. During gestation and after birth, maternal health status and care of offspring have substantial effects on exploration of novel situations and generalized social behavior (Weinstock 2005; Martin-Gronert and Ozanne 2006; Meaney 2001; Chapman and Scott 2001; Parker 1989; Pederson et al. 1998; Vanijzendoorn 1995). Interestingly, higher levels of parenting stress have also been documented in hearing parents of deaf toddlers and preschoolers (Quittner et al. 2010).

Studies of licking and grooming behaviors in rodents offer evidence for nongenomic transmission: this behavioral repertoire is acquired through the maternal care of offspring (Weaver et al. 2004; Champagne et al. 2003a; Fleming et al. 2002). Cross-fostering studies of rodents demonstrate plasticity in these generational trends and indicate that the phenotype arises from environmental exposure rather than genetic predetermination (Maestripieri et al. 2005). Even though the genetic sequence is not altered, persistent behavioral changes continue into adulthood and are associated with neurobiological modifications such as oxytocin receptor density, a marker that correlates with rodent licking and grooming (Champagne 2008; Champagne et al. 2003b).

Multiple mechanisms by which epigenetics can influence development of cortical regions have been identified, and more are likely to be found. For example, DNA methylation is a heritable modification of genomic DNA. Patterns of DNA methylation may play a large role in controlling development, imprinting, transcriptional regulation, chromatin structure, and overall genomic stability (Okano et al. 1999; Strathdee and Brown 2002). Methylation can prevent access of transcription factors and RNA polymerase to DNA, as well as attract protein complexes that act to silence genes (Strathdee and Brown 2002). Quantitative assessment of DNA methylation levels suggests that DNA methylation signatures distinguish brain regions and may help account for region-specific, functional specialization (Ladd-Acosta et al. 2007). This model offers one mechanism wherein phenotypic plasticity is manifest: the ability of cells to change their behavior in response to internal or external environmental cues (Feinberg 2007).

Epigenetic models of learning

Epigenetic models offer paradigms for understanding the acquisition of a skill set that is shaped by ongoing learning, wherein learning itself affects the subsequent ability to learn something new. One application of such a model describes the “cognitive development” of robots that learn through a developmental algorithm. Emergent self-programming allows a robot to continuously expand its functional capacity, building new experiences on previously acquired skills (Pfeifer et al. 2007). Here, robotic operation is guided by a software program (or ‘genome’) that is inherently modifiable by developmental experiences (creating an ‘epigenome’). The overall result is a model of learning in which learning itself affects the later capacity of the brain to acquire new information.

Three components are considered necessary for a robot to accomplish ongoing, emergent abilities: (1) abstractions, to focus attention on relevant inputs; (2) anticipation, to predict environmental change; and (3) self-motivation, to push beyond extant capacity toward more complex understanding (Blank et al. 2005). Such models of robotic learning have been used to model infant–caregiver interactions (Breazeal and Scassellati 2000), as well as language development (Cangelosi and Riga 2011). Robotic models reveal how the principles of language development and epigenetics can be successfully merged: the environment and innate factors contribute to one another in a dynamic fashion to promote language learning through biological motivation, multidimensional experiences, and bidirectional interactions.
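
To make these three components concrete, the following is a minimal, purely illustrative Python sketch (not drawn from Blank et al. 2005 or the other cited architectures): a toy agent maintains a predictive table (abstraction), uses it to anticipate outcomes, and is “self-motivated” to choose actions whose outcomes it cannot yet predict. The toy environment and all names are hypothetical.

```python
import random

class DevelopmentalAgent:
    """Toy learner pairing abstraction, anticipation, and self-motivation."""

    def __init__(self):
        # Abstraction: a table mapping (state, action) to a predicted next state.
        self.model = {}

    def anticipate(self, state, action):
        """Predict the next state; None if this situation is still unknown."""
        return self.model.get((state, action))

    def choose(self, state, actions):
        """Self-motivation: prefer actions whose outcomes cannot yet be anticipated."""
        unknown = [a for a in actions if self.anticipate(state, a) is None]
        return random.choice(unknown) if unknown else random.choice(actions)

    def learn(self, state, action, next_state):
        """Fold new experience into the model; what has been learned
        expands what can be anticipated (and explored) next."""
        self.model[(state, action)] = next_state


def toy_environment(state, action):
    """A deterministic toy world with five states."""
    return (state + action) % 5


agent = DevelopmentalAgent()
state = 0
for _ in range(50):
    action = agent.choose(state, actions=[1, 2, 3])
    next_state = toy_environment(state, action)
    agent.learn(state, action, next_state)
    state = next_state

print(f"situations mastered: {len(agent.model)} of 15")
```

The design point mirrored here is that the agent’s acquired knowledge (its ‘epigenome’) redirects its own exploration, so learning alters the capacity for subsequent learning.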

Multivariable analyses of language learning after early cochlear implantation

The CDaCI study has followed 188 children with SNHL who received cochlear implants at six collaborating sites and a control group of 97 normal-hearing children who were recruited from early elementary schools affiliated with two of the cochlear implant programs (Fink et al. 2007). Children were recruited to the study in 2002–2004. Participants’ raw scores and age-normed levels of comprehension and expression of spoken language comprise the primary outcome variables assessed by the CDaCI study. Secondary outcome variables relate to measures of hearing and speech recognition, cognitive and psychosocial development, and participating families’ perception of their child’s developmental progress.

Characteristics of the CDaCI participants are described elsewhere (Fink et al. 2007). The etiology of hearing loss was unclear in a large minority of the 188 children with SNHL who received cochlear implants, and genetic testing was unavailable. However, a majority of the participants were diagnosed at birth with severe-to-profound SNHL (n = 116), and these children’s results to date form the focus of this report.

Speech recognition

Speech recognition, a direct measure of the auditory benefit of the cochlear implant, has been evaluated with hierarchical measures that compare implanted children with age-matched children with normal hearing (Eisenberg et al. 2006; Wang et al. 2008). Children with CIs demonstrated progress in speech identification within a single year of implantation and approached testing levels seen in normal-hearing controls. Though there was a high degree of variability, some children were able to identify sentences even when perceptual demands were increased by a semantic distractor.

Bilateral cochlear implantation has emerged since the CDaCI study was launched in 2002. In the hopes of achieving the benefits of binaural hearing (directional hearing and improved understanding of speech in noise), bilateral implants have now been placed in 40% of children in the CDaCI study. Early data indicate that bilateral CI hearing confers significant advantages in emergent speech recognition and in language learning (Niparko et al. 2010). These initial results set up further longitudinal study of higher order outcomes that rely on restored access to acoustic–phonetic inputs in unilateral vs. bilateral implant conditions. Specifically with respect to bilateral inputs, emerging evidence indicates that speech is processed bilaterally in auditory cortical areas and that critical, complementary analyses of the speech signal are carried out across the hemispheres to enable beam-forming and other higher order processing (Poeppel 2003; Millman et al. 2011). It will be important to understand the potential for bilateral implants to entrain such processing.

Spoken language outcomes

Both comprehension and expression of spoken language are important markers of parent-perceived success of a CI (Geers et al. 2008). Auditory processing is a developmental prerequisite of the phonological learning that subserves the acquisition of spoken language. The ability to recognize speech represents an integration of sensory, linguistic, and cognitive processes that involve acoustic–phonetic identification and lexical access from memory. When hearing is degraded by early-onset sensorineural hearing loss, the ability to make fine acoustic–phonetic distinctions is compromised. Cochlear implant candidates evaluated in the CDaCI study are children whose level of sensorineural hearing loss did not allow for growth in spoken language despite the use of hearing aids. Reports that detail the trajectory of development of auditory processing capacity after cochlear implantation in the CDaCI study have been published elsewhere (Eisenberg et al. 2006; Wang et al. 2008).

In the present analysis, children were first stratified into two subgroups based on whether they had developed some spoken language skills, defined as a comprehension or expression standard score ≥70 on the Reynell Developmental Language Scales (RDLS; Reynell and Gruber 1990) evaluated at baseline (before implantation). Children who had not yet developed significant spoken language skills at baseline were further stratified by the age at which their cochlear implants were activated. Table 1 lists characteristics of the 116 children assessed in this report. The first column lists characteristics and associations for the 20 of the 116 children with measurable language (RDLS standard score ≥70) at baseline (prior to CI). Columns 2 and 3 of Table 1 list the characteristics of children whose implants were activated at <18 months (n = 34) and ≥18 months (n = 62) of age, respectively, all of whom had no measurable spoken language at baseline.
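
As a minimal sketch of this stratification, assuming a dataset with hypothetical column names (rdls_comprehension, rdls_expression, activation_age_months), the subgroup assignment could be expressed in Python as follows; this is illustrative only and is not the CDaCI analysis code.

```python
import numpy as np
import pandas as pd

def stratify(df: pd.DataFrame) -> pd.Series:
    """Assign each child to one of the three baseline subgroups described above."""
    # Baseline spoken language: RDLS comprehension or expression standard score >= 70.
    has_language = (df["rdls_comprehension"] >= 70) | (df["rdls_expression"] >= 70)
    early_activation = df["activation_age_months"] < 18
    return pd.Series(
        np.select(
            [has_language, ~has_language & early_activation],
            ["language at baseline", "no baseline language, activated <18 mo"],
            default="no baseline language, activated >=18 mo",
        ),
        index=df.index,
    )

# Toy example: one child per subgroup.
children = pd.DataFrame({
    "rdls_comprehension":    [75, 50, 40],
    "rdls_expression":       [60, 45, 55],
    "activation_age_months": [30, 12, 24],
})
print(stratify(children))
```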

Table 1 Baseline characteristics of 116 young children with congenital hearing loss who received cochlear implants before 5 years of age in the Childhood Development after Cochlear Implantation (CDaCI) study

Table 2 presents results from 116 CDaCI study children with congenital SNHL evaluated with the Comprehensive Assessment of Spoken Language (CASL) at 4–5 years after implant activation. Fifteen core tests comprise the full CASL (Carrow-Woolfolk 1999), assessing four language structure categories (Lexical/Semantic, Syntactic, Supralinguistic, and Pragmatic) in receptive and expressive formats through the age of 21 years. Four core tests are age-appropriate for CDaCI participants assessed 4–5 years after implantation: Antonyms, Syntax Construction, Paragraph Comprehension of Syntax, and Pragmatic Judgment. The Antonyms test assesses the lexical/semantic aspect of language development by measuring the retrieval of spoken single words and the oral vocabulary needed to produce them; performance on the Antonyms test thus depends on both receptive and expressive oral vocabulary. The Pragmatic Judgment test evaluates the pragmatics of language development in both receptive and expressive formats, while the Paragraph Comprehension of Syntax and Syntax Construction tests measure the syntactic aspects of receptive and expressive language, respectively. A core composite score based on results of these four age-appropriate core tests is also examined in Table 2.

Table 2 Multivariable-adjusted (adjusted for all other variables in the table and further adjusted for gender, race, and ethnicity) mixed effects modeling analyses for standard scores of the Comprehensive Assessment of Spoken Language acquired after 4–5 years of experience with cochlear implant among 116 young children with congenital hearing loss in the CDaCI study
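
The class of model named in the Table 2 caption can be sketched roughly as follows, assuming long-format longitudinal data with hypothetical column names; this illustrates a multivariable-adjusted mixed-effects growth-curve specification (random intercept and slope per child) in Python with statsmodels, not the authors’ exact model or covariate coding.

```python
import statsmodels.formula.api as smf

def fit_growth_model(df):
    """Fit a growth-curve model: fixed effects for subgroup, time, and covariates;
    a random intercept and random slope over time for each child."""
    model = smf.mixedlm(
        "casl_standard_score ~ years_since_activation * subgroup"
        " + maternal_sensitivity + low_income + gender + race + ethnicity",
        data=df,
        groups=df["child_id"],                 # one random-effects cluster per child
        re_formula="~years_since_activation",  # random intercept and slope
    )
    return model.fit()
```

The interaction term lets the language trajectory over time differ by baseline subgroup, which is the comparison the tables summarize.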

After 4–5 years of implant use, the subgroup of children who had acquired some spoken language skills at baseline, regardless of when their cochlear implants were received and activated, showed significantly higher average scores in all four language subdomains than the subgroup with no measurable language at baseline whose implants were activated after 18 months of age (Table 2). By comparison, these children demonstrated language skills, on average, similar to or only slightly better than (Antonyms, p < 0.05) those of children who were naive to spoken language at baseline and had their implants activated under 18 months of age.

Among children who were naive to spoken language at baseline, the subgroup whose implants were activated before 12 months of age (N = 6) exhibited the highest average spoken language performance across all four CASL core tests, with average standard scores near or above the score of 100 established as the age-normed mean; these scores were significantly higher than those of children who had acquired some spoken language at baseline (data not shown).

Among children who had not acquired significant language skills prior to implantation, those whose implants were activated at 18 months of age or older had the lowest average standard scores across all four CASL subdomains compared to their counterparts with implant activation prior to 18 months of age, with syntax construction and pragmatics standard scores that averaged more than 2 standard deviations below the norm. The difference in skill development for each of the language subdomains between these two subgroups was highly significant (all p < 0.01). However, trends in the development of syntax expression and reception and of pragmatics suggest that these subdomains are more sensitive to age of implantation than is the acquisition of vocabulary (Table 2). Gender differences were not significant, although girls showed slightly higher mean estimates across the measures in Table 2.

Table 3 summarizes the patterns of spoken language development using the core composite standard score derived from the age-appropriate language subdomains. Of note, hearing variables, maternal sensitivity, and SES carried variable predictive value for the core composite outcome across the three groups of children assessed.

Table 3 Multivariable-adjusted (adjusted for all other variables in the table and further adjusted for gender, race, and ethnicity) mixed effects modeling analyses for core composite of the Comprehensive Assessment of Spoken Language (CASL) acquired after 4 to 5 years of experience with cochlear implant among 116 young children with congenital hearing loss

In agreement with prior single-center and convenience-sampled studies, CDaCI results have revealed that the CI produces consistent effects in improving the trajectory of spoken language learning and that age of implantation is a significant predictor of the level of spoken language skills acquired (Niparko et al. 2010; Tobey et al. 2011). Children who received their implants before 18 months of age exhibited language performance scores that remained roughly within 1 standard deviation of those of their normal-hearing peers, while children who were older at the time of implantation exhibited significantly larger gaps between their language scores and the scores predicted by their chronological age, both at baseline and over periods of 3 (Niparko et al. 2010) to 6 years (Tobey et al. 2011) after a CI is placed.

Observations of superior language development associated with earlier access to speech information are consistent with concepts of optimal periods for auditory-based learning, which suggest that early auditory exposure is necessary for neuronal commitment to support the auditory processing of complex signals (Kral 2007). While these results demonstrate a clear advantage of early treatment, the prospects for such neurodevelopmental deficits to recover will require further longitudinal assessment (Ehninger et al. 2008).

These observations lend support to a model of language development that emphasizes the importance of period-sensitive, auditory-based learning. Deaf children who gain auditory exposure within an early, optimal period show improved subsequent language development relative to deaf children with reduced early auditory exposure before hearing is electrically restored, and even more so relative to deaf children whose auditory capacity is restored toward the end of, or outside, such an optimal period.

Environmental factors play a measurable role in early language learning, as revealed in CDaCI data (Niparko et al. 2010). Annual family income of less than $50,000 was associated with lower average standard scores across subdomains, with the strongest association observed for Antonyms scores (p < 0.01), while associations with other subdomains did not reach statistical significance after multivariable adjustment. This observation is consistent with previous CDaCI analyses in which significantly reduced language learning rates were associated with lower socioeconomic status (SES) (Niparko et al. 2010). These findings are also consistent with prior studies suggesting that children from lower SES homes are exposed to a narrower range of language due to reduced parental attention and talking (Walker et al. 1994; Hart and Risley 1995). Children in lower SES households receive less encouragement to talk and ultimately experience deficits in language and academic performance when they enter school.

Together, these data indicate that outcomes after the introduction of electrical hearing with a CI differ as modified by a range of factors related to hearing and environmental experience. Importantly, the modifiers appear to exert different effects depending on the timing of onset of electrical hearing, with different levels of effect across language subsystems.

We note that results were based on subgroup analyses with selected variables included in the model, and not all of the described patterns were statistically significant. Because the subgroups had smaller sample sizes, it is not practical to include as many variables as in larger analytic models (Niparko et al. 2010). Observed trends may also have been subject to floor effects in the measured language performance results.

Cognitive and psychosocial development

Early childhood deafness has been associated with disruptions in the development of cognitive processes related to focusing attention. Studies of deaf children with both hearing aids and cochlear implants have consistently documented deficits in visual, selective attention (Smith et al. 1998; Quittner and Opipari 1994), with concomitant increases in externalizing behavior problems (Mitchell and Quittner 1996; Carr and Durand 1985). These findings are counterintuitive, since the visual system develops normally in this population and better visual attention would be adaptive for interpreting signs or other forms of visual communication. The deficits are likely due to a lack of integration of the visual and auditory systems early in brain development and the need for the young deaf child to “monitor” his or her environment visually rather than auditorily (Smith et al. 1998). Two studies have now shown that cochlear implants are associated with improvements in visual, selective attention (Smith et al. 1998; Quittner et al. 2008); in the CDaCI study, improvements in visual tracking and attention were among the earliest positive effects of cochlear implantation (Quittner et al. 2008).

A key domain assessed by the CDaCI study is the participant’s cognitive, social, and emotional development as it relates to language acquisition. For example, one CDaCI study reported on the development of symbolic play in both deaf and hearing children, using videotaped assessments of play without adult mediation (Quittner et al. 2011). In typically developing children, language emerges in parallel with the use of symbols in play (e.g., using a peg to represent a person). Because these two skills develop in synchrony, it is difficult to determine whether they reflect a common, underlying ability or whether some level of language is needed to facilitate symbolic representation (Thal and Bates 1989). Results indicate that, at baseline, deaf CDaCI participants were delayed in achieving symbolic play compared to hearing children. However, restoration of auditory input with cochlear implantation before the age of 2 was associated with greater achievement of symbolic play, at levels similar to those of hearing controls. Furthermore, growth of oral language was positively and significantly associated with the acquisition of symbolic play in deaf children using cochlear implants.

Evidence across a variety of studies indicates that significant disruptions may occur in parent–child interactions among hearing mothers of infants and toddlers with SNHL (Meadow-Orlans et al. 1997; Musselman and Kircaali-Iftar 1996; Quittner et al. 1990; Quittner et al. 2010). Observational studies have shown that, relative to mothers in either hearing or deaf dyads, hearing mothers of deaf children tend to be more controlling in their verbal and nonverbal interactions (Musselman and Churchill 1991), spend less time in coordinated joint attention with the child (Waxman et al. 1996), and have greater difficulty responding to the child’s emotional and behavioral cues (Swisher 2010). The consequences of such disruptions may include less secure attachment, difficulties sustaining attention and exerting behavioral control, and slower development of communicative competence (Reivich and Rothrock 1972; Bornstein 1990; Quittner et al. 2004; Lederberg et al. 2000; Spencer et al. 1991).

Early in development, the quality of parent–child interactions is a key source of emotional attachment, scaffolding the development of communicative, cognitive, and behavioral skills (Vygotsky et al. 1972; Bakeman and Adamson 1984; Sroufe et al. 1999). Quality of parent–child interactions has been shown to provide a critical foundation for overall development. In the NICHD Early Childcare Study, a nationally representative, hearing sample of children followed from birth through late adolescence, maternal sensitivity was a critical predictor of children’s cognitive, social, and behavioral development (NICHD 1999). Maternal sensitivity is a construct that measures warmth, positive regard, and respect for autonomy in the parent–child relationship. In the CDaCI study, maternal sensitivity was measured in videotaped structured and unstructured tasks. At baseline, mothers of deaf children were less “sensitive” in their interactions with their deaf children than mothers of hearing children. Furthermore, maternal sensitivity was significantly associated with better growth in language learning, after controlling for SES and other child characteristics (Niparko et al. 2010; Quittner et al. 2011). As shown in Tables 2 and 3, regardless of subgroup, higher maternal sensitivity to the communication needs of a deaf child at baseline is significantly associated with greater spoken language development, to a similar degree across all four CASL subdomains.

Raising a deaf child is associated with significant parental stress due to the substantial long-term challenges of communication, education, and health care (Lederberg and Golbach 2002). Consistent with observations of the disruptive effects of SNHL, Barker et al. (2009) observed significant behavioral problems within the CDaCI cohort of deaf children at baseline, as measured on the well-validated Child Behavioral Checklist. As language skills improved, behavior problems diminished over the 3 years of longitudinal measurement (Romero 2001). Furthermore, high levels of parenting stress are associated with poor social and emotional development as well as increased behavioral problems (Crnic et al. 2002). Parents in the CDaCI study experienced increased context-specific stress associated with raising a child with hearing impairment when compared to hearing parents (Quittner et al. 2010). Higher levels of parenting stress were also negatively related to language development in the deaf sample. Language skills are likely to affect interactions between children and their parents because (1) language facilitates children’s regulation of attention, emotion, and behavior, and (2) language skills facilitate communication with parents, enabling better interactions and reduced levels of family stress (Quittner et al. 2010).

Given the impact of such disruptions, a parent’s perception of their child’s progress in development would seem to be a key predictor of active engagement with post-implant rehabilitation. Lin et al. (2008) studied the association between several commonly used outcome instruments and a measure of parental perceptions of development to gain insight into how clinical tests reflect parental perceptions of a child’s developmental status. Associations between parental attitude and clinical outcomes varied with child age, but outcome measures were positively associated with parental perceptions of development, with the most robust correlations observed for measures of spoken language.

The CDaCI study continues to measure parents’ annual ratings of their child’s global health-related quality of life (HRQoL) and developmental status using visual analog scales (Fink et al. 2007). Results to date parallel those related to the general level of functioning across WHO-based, age-appropriate domains (Clark et al. 2011). Longitudinal surveys indicate that the developmental deficits of children who were CI candidates, relative to NH children, improved across the cohort 3 years after CI, with the greatest improvement observed in deaf children implanted before 18 months of age. Such parental perspectives on HRQoL and development provide practical insight into the optimal timing of interventions for early-onset deafness, especially in the context of parental expectations of their child’s ability to effectively acquire spoken language after a CI.

Discussion

In the CDaCI study, we have observed the effects of an apparent sensitive period, such that greater benefit for spoken language acquisition after a CI is significantly associated with earlier implantation. Based on this prospective dataset, a significantly greater trajectory of spoken language learning occurs in children implanted in infant and early toddler stages relative to implantation in later toddler stages. Outcomes, however, are significantly modified by a range of factors based in a child’s pre- and post-implant experience. Our observations are consistent with a growing body of evidence that epigenetic modification of the CNS subserves periods for learning complex tasks, such as the subsystems of spoken language, and is ultimately important in, if not definitive of, effective language comprehension and expression.

Sensitive periods in the development of auditory cortex terminate with reductions in overall synaptic activity and are associated with an inability to completely restore hearing function (Kral 2007). Changes in synaptic plasticity are likely due to genetic timing of brain sensitivity to language combined with epigenetic features that are guided by the availability of adequate sensory input (Kral 2007; Panksepp 2008). Though the closure of a sensitive period without development of auditory circuits is evident in cat models of congenital deafness, exposure to auditory stimuli by means of cochlear implantation appears capable of producing evoked potentials in more cortical areas, at higher amplitudes, and with longer latency responses that resemble those of normal-hearing cats (Klinke et al. 1999). This suggests the ability of cochlear implantation to restore, or potentially preserve, normal auditory input to cortical areas. EEG studies of auditory-evoked potentials have also demonstrated normal latencies of cortical responses in implanted children, but only if they received an implant before 3.5 years of age, suggesting a watershed age of implantation that affects the capacity for cortical processing (Sharma et al. 2007).

Two key observations are of interest to the development of an epigenetic model of spoken language development when hearing restoration with a CI is pursued: (1) elements of the language system (e.g., phonetics, vocabulary, grammar, and pragmatics) appear to be differentially affected by delayed exposure to spoken language and by modifying factors, and (2) delayed exposure can cause disruptions in the social/affective process of parentally guided language learning.

The significance of sensitive periods within this model comes from the potential of a CI to restore normal auditory learning capacity in the context of cortical plasticity. We observed trends of dissociation between vocabulary and syntax, with implantation prior to 18 months having a larger positive impact on the development of receptive and expressive syntax than on vocabulary acquisition. Importantly, children who received a CI prior to 18 months of age also demonstrated relatively strong development of expressive syntax and pragmatic use of spoken language, whereas implantation at later stages of toddler development was associated with vulnerabilities in pragmatics and expressive syntax. Though the subdomains are generally highly associated in their developmental patterns (Bates and Dick 2002), impaired acquisition of grammar relative to vocabulary has previously been noted in deaf children, suggesting the potential for differential development across the subdomains of spoken language (Tomblin et al. 2007).

Children with hearing loss exhibit specific deficits in grammar development that are similar to those of children with specific language impairment, demonstrating that such grammar-specific deficits can be observed even in children whose cortical neurosystems developed in the presence of normal auditory inputs (Norbury et al. 2001; Briscoe et al. 2001; Watkins and Rice 1994). These results suggest that, in children with hearing loss, normally linked dimensions of language can become dissociated from one another. The aspects of learning specifically associated with grammar must be analyzed to understand the basis for this dissociation.

As events in the real world generally stimulate multiple sensory modalities (e.g., auditory and visual), it is important to consider that developmental outcomes may reflect interactions between the auditory system and other sensory modalities (Kral et al. 2000). Multisensory integration can be thought of in terms of salience, the ability of a stimulus to capture attention. Multisensory inputs may enhance the salience of a particular stimulus that would otherwise have evaded detection and subsequent response. These interactions are therefore most relevant when a stimulus has low salience (Calvert et al. 2001). Detection and subsequent learning of the rules of grammar rely on attention to the more subtle “little words” and (morphosyntactic) endings of words and phrases (Bates and Dick 2002). Thus, reduced access to acoustic–phonetic cues may inhibit the natural attentional enhancement of grammatical cues (Dick et al. 2001; Singer Harris et al. 1997).

Having considered the importance of multisensory integration of auditory and visual cues, we can consider its relationship to sensitive periods. Though auditory perception is restored with cochlear implantation, co-activated multisensory processes in deaf children demonstrate a bias toward visual rather than auditory stimuli (Bergeson et al. 2005). The persistence of a visual bias suggests that multisensory integration may not develop normally when a single sense dominates in early development. Auditory stimulation in an early sensitive period may, therefore, be necessary to ensure adequate influence on the central circuits that enable multisensory integration. Cochlear implantation within the first year of life may rescue these circuits and enable matching of auditory and visual cues (Bergeson et al. 2010). Evidence for this comes from an examination of implanted congenitally deaf children, who were more likely to fuse auditory and visual information in processing if they received their cochlear implant before 2.5 years of age (Schorr et al. 2005). This observation reinforces the idea that early, effective auditory stimulation is necessary to establish multisensory connections and preserve the attentional resources necessary for learning in the subdomains of spoken language.

Our observations suggest that the auditory system communicates with the visual system through circuits that are established early on and that affect learning within language subdomains. Detection and learning of grammar require multisensory interactions because of the low perceptual salience of grammatical cues. We suggest that, unlike vocabulary, grammar substantially improved for the group of children who received implants prior to 18 months of age because early activation of auditory cortex was able to rescue the development of multisensory integration circuits that ultimately amplified the salience of grammatical cues.

Observations gained from the CDaCI study can also be considered from an epigenetic perspective by considering the multiple ways a child interacts with her environment, specifically the impact of limited verbal language on parent–child interactions. The development of language necessitates and derives from encounters with the world throughout childhood (Panksepp 2008). The affective components of these experiences have a measurable impact on the trajectory that language development follows. Joy from play, nurturance from care, and panic from separation distress are just a few of the many emotional aspects of the relationship between mother and child that shape language development (Schore 2003; Trevarthen 2001). Such experiences associate with a child’s desire to engage with the world in an exploratory fashion, which is inevitably accompanied by exposure to a diverse range of sounds, including utterance material (Panksepp 2008). “Motherese,” the high-pitched, melodic, and repetitive form of speech with exaggerated intonation, appears well suited to the acquisition of language (Fernald 1989; Trevarthen and Aitken 2001). While this form of speech has long been known to engage infants, it also typifies the affective bond shared between mother and child and plausibly promotes profound neurobiological changes that support the development of language.

Aspects of motivation are critical to an understanding of a model by which epigenetic changes are associated with parental nurturing to promote the development of spoken language (MacLean 1990). Self-motivation is highly associated with activity of the anterior cingulate regions that appear to enact social–emotional responses. Activity within these regions associates both with the experience of separation distress and with the formation of social bonds (Panksepp 2003). Interestingly, bilateral damage to the same regions results in akinetic mutism, a deficit of language despite adequate motor function (Devinsky et al. 1995). This suggests the potential of these regions to “gate” the influence of affective interactions during childhood on the development of lifelong language skills. Though neocortical regions ultimately process linguistic information, it is important to note that non-linguistic areas can provide the attention and motivation that promote, or inhibit, language-associated activities (Panksepp 2008).

Recent discoveries in molecular genetics have begun to elucidate the patterns of genetic expression that underlie the emergent CNS circuitry supporting language learning. For example, one gene that has been implicated in language (FOXP2) is concentrated in the basal ganglia. Evidence from songbirds suggests that this gene’s product may be necessary for trial-and-error vocal learning (Scharff and Haesler 2005; Ölveczky et al. 2005). Motivation to pursue such trial-and-error exploration is essential for acquiring language and is likely dependent on encouragement derived from supportive, affective social interactions. Preliminary evidence suggests that FOXP2 may impact neuronal plasticity in an epigenetic fashion: the mRNAs it regulates support neurite outgrowth and synapse formation in circuits involved in motor learning in rodents and song learning in birds (Fisher and Scharff 2009; Vernes et al. 2011). Furthermore, FOXP2 expression is associated with auditory inputs. Mutations in FOXP2 in rodents appear to specifically affect either the synchrony of synaptic transmission from the cochlea to the auditory brainstem or the activation of auditory nerve fibers that carry auditory information to the brainstem (Kurt et al. 2009).

Epigenetic modifications in these same subcortical regions demonstrate a possible mechanism that controls specific cortical functions. Selective lesions to the cholinergic system in the basal forebrain of rats result in a shift from long-term potentiation to long-term depression, a transition that is accompanied by a loss of synaptic plasticity in the visual cortex (Kuczweski et al. 2005). Such observations suggest mechanisms by which epigenetic modifications may influence the duration of sensitive periods (Hanganu-Opatz 2010; van Ooyen 2011).

We can hypothesize a basic mechanism by which experience acts through epigenetic means to promote cortical differentiation and regulate sensitive periods. The results of the CDaCI study fit well into the proposed model. The maximal effect of implantation is seen in the group implanted earliest, suggesting a sensitive period that begins to close for the other groups that experienced constrained access to the key acoustic–phonetic perceptions that normally initiate spoken language learning early in life. Selective effects on the domain of grammar highlight the role that attention likely plays in the acquisition of grammar skills. The necessity for the environment to provide sensory information and for this information to be recognized by the nervous system appears to be absolute, though a critical time frame exists during which intervention allows at least partial recovery of function. Ongoing and emerging factors contribute to early development of behaviors of interest in the CDaCI study, with the primary outcome variable being the development of spoken language. There are important contributions to language development from multiple sources (family, social interactions) as well as synergistic effects of one developing system on another (e.g., low language level affecting behavioral organization).

Our observations of the key role played by parent–child interactions in shaping outcomes after a CI provide the most powerful example of how epigenetic changes could be regulated by the environment. A bidirectional relationship can be hypothesized between language development and parent–child interactions. In a child with SNHL, although innate language systems may be intact, with a sole deficit located in the perception of sound, the child may have either an inadequate store of utterance material or inadequate experience with meaning interpretation to fully engage with language tasks. A child’s cognitive skills, parent–child interactions, social adjustment, behavioral skills, parental well-being, and social skills interact within the home milieu early on and, over time, with the outside environment; all are nested within a framework of environmental experience shaped by socioeconomics and societal influences.

The appropriation and command of spoken language directly help children regulate their attention and communicate in ways that affect emotion and behavior, and they facilitate caregiver and, later, peer communication that enables further refinement and nuanced use of spoken language. When a child’s command of language is lacking, the result is inevitably impairment in communication with parents and heightened risk of greater parental stress. Parental perception of their child’s language skills therefore predictably results in a change in the way parents interact. How parents interpret their child’s abilities, and how this interpretation, in turn, affects the development of further verbal (and written) interactions, are key questions that can be answered with longitudinal follow-up.

In the same way that a rodent raised without licking and grooming undergoes epigenetic changes that ultimately affect behavior, one can hypothesize that a young child developing without sufficient affective and social interaction may experience epigenetic modification that closes optimal periods by inhibiting synaptic plasticity. Consider, for example, a mother who is frustrated by a perceived lack of language development in her child with a recent CI. Her interpretation may prevent her from using “motherese” and actively communicating with her child through speech as she otherwise would have done. Data from the CDaCI study, as well as those from field studies of hearing children, indicate that the lack of such affective stimulation can stifle the child’s motivation to speak and to explore novel applications of spoken language. Furthermore, as attention directed at language decreases, neurobiological observations suggest a likely associated diminution in synaptic plasticity that will ultimately inhibit future progress in language acquisition. In this model, the result can be a harmful cycle of poor language skills causing parental stress and disappointment, with resultant negative and multidimensional influences on the development of spoken language skills in a child with a CI.

This model demonstrates the clinical importance of promoting parental support and of intervening when a communicatively inactive home environment and parental stress are detected. Parents of children with hearing loss love their children and, though they may seek to nurture them in different ways, it is essential that they be encouraged to emphasize the same language-based affection provided to children with normal hearing. Additionally, this model provides a concrete example of the hypothesized environmental impact on cortical function and plasticity. We envision that multiple epigenetic changes, such as one regulating attention to language based on affective social interactions, combine to shape the development of the higher order cognitive functions of spoken language after surgical intervention in deafness.

Summary

A convergence of the biological, cognitive, and communication sciences potentially unifies our approach to the complexities of developmental learning. Within a multidimensional, epigenetic framework, this report addresses childhood acquisition of spoken language after cochlear implantation—a process that represents an interplay between general learning mechanisms, auditory perception, and ongoing environmental experience with the statistical regularities of auditory input. From such an interplay, a child gains operational insight into the meaning and communicative intent conveyed by the sounds of speech of others.

The CDaCI study captures variance in naturally occurring circumstances that affect language learning, reflected in inhomogeneities in the baseline biological factors and environments of participants. Within such variability, however, are opportunities to identify variables of clinical importance in addressing how children challenged by SNHL can learn to receive and produce more adequate speech and language. CDaCI data indicate that a range of factors associate with the pattern of acquisition of spoken language skills after cochlear implantation. Earlier exposure to sound via the CI was associated with a faster rate of spoken language growth. Phonological, semantic, grammatical, and pragmatic development differed with age of implantation. Such results support models of language learning that predict that, with earlier onset of access to acoustic–phonemic inputs, growth rates in spoken language can approach those of normal-hearing children, whereas delayed access associates with slower growth rates, particularly within the language domains of syntax and pragmatics. Multivariable analyses suggest that language learning involves complex interactions in which modifying factors vary in their impact with the age of onset of effective hearing, and the impact of biological and experiential factors varies with the age at which perceptual capabilities are introduced via cochlear implantation. A wealth of data indicates that neurodevelopmental phenomena related to language learning are driven by time-sensitive, bidirectional events. If environmental cues and interaction are not provided in a timely manner, developmental potential narrows. Conversely, Bates et al. (2003) have observed that brain maturation affects experience, and experience returns the “favor” by altering brain structure. In the periods of exponential bursts that characterize early language learning, compelling data underscore the role of mutually beneficial, bidirectional interactions between brain and behavior.

Key advances will come from a fuller understanding of the specific neural events that drive language acquisition and of the genetic control that promotes learning from experience. For example, if we can make deductions about epigenetic controls of brain development through an understanding of how synaptogenesis and regression, synaptic refinement, and cortical connectivity are influenced by the transmission, reception, and production of speech, we can inform approaches to rehabilitation of the child with early-onset SNHL that promote the remarkable achievement represented by spoken language development in the typical child.