
1 Introduction

There is growing advocacy nowadays for the rights of people with disabilities (Toribio Camuñas & Jiménez Hurtado, 2022), especially following the United Nations Convention on the Rights of Persons with Disabilities (2007), and a defense of their right to access leisure activities. Translation modalities such as audio description and subtitling for the d/Deaf and hard of hearing (SDH) are tools that enable people with disabilities to access audiovisual products on video-on-demand (VOD) platforms.

Nevertheless, as Martínez-Martínez et al. (2019: 411) note, despite the increasing availability of accessible products, their quality tends to be deficient. This is especially true of the translation of semiotic codes other than the verbal code in SDH, i.e., sound effects and music. These fundamental components of the film soundtrack (Bordwell & Thompson, 2010: 298) are often translated inconsistently and arbitrarily, possibly because professionals lack an understanding of the characteristics and conventions of the semiotic codes of music and sound, or because SDH lacks normative standards establishing patterns for translating these elements, forcing subtitlers to rely on their intuition.

This study represents an approach to the intersemiotic translation of sound effects and music in SDH of audiovisual products on VOD platforms in three different languages (English, German, and Spanish), analyzing the translation strategies used and the presence (or absence) of consistency in the translation of sound effects.

The proposed objectives are, firstly, to review the national guidelines of Germany, the USA, and Spain, as well as Netflix’s SDH guides in English (EN), German (DE), and Spanish (ES), concerning the intersemiotic translation of sound, to ascertain their adequacy in achieving quality subtitling; secondly, to identify the most recurrent intersemiotic translation strategies in a corpus of Netflix series episodes in English, German, and Spanish and assess their relevance; and lastly, to determine if the intersemiotic translation of sound effects and music in the corpus episodes is consistent.

To accomplish this, we will start by describing the film soundtrack, the source text of the products analyzed in this chapter, and the specific characteristics of the semiotic codes of sound and music. Subsequently, we will analyze the norms and style guides for SDH, particularly concerning intersemiotic translation, and their relevance in crafting quality subtitles. We will then describe the methodology used in the study to analyze Netflix’s SDH and evaluate the results obtained through examples.

2 Film Soundtrack: Beyond the Dialogues

Understanding the nature of the source text, that is, the film soundtrack, is crucial for being aware of the relevance of intersemiotic translation strategies employed in SDH for music and sound effects. In this section, we will define the elements of the film soundtrack, their origins, and the characteristics of sound and music.

While cinema began with silent films, with sound limited to live music or a narrator’s commentary (Martínez-Martínez, 2015: 33), it is now inconceivable to envision an audiovisual product without its film soundtrack. This auditory element, as Bordwell and Thompson (2010: 298) highlight, encompasses not just the verbal code of dialogue but also sound effects and music, which operate in harmony rather than in subservience to one another. Just as the choice of intralinguistic translation strategies in SDH can be analyzed through the study of the verbal code and linguistics, an understanding of the sonic and musical codes governing sound effects and music, respectively, is necessary to select the most suitable strategy.

Sound possesses four primary characteristics that define any effect: pitch, intensity, timbre, and duration (Jiménez Hurtado & Martínez-Martínez, 2017: 246). Pitch refers to the speed of vibration of a sound, perceived as high or low; intensity pertains to the amount of energy propagated by the soundwave and the proximity or distance of the sound source (loud, soft, distant); timbre denotes the distinction of sounds according to their nature (telephone, guitar, voice), including the synesthetic attributes ascribed to them (velvety, metallic, etc.); and duration indicates the length of the sound effect over time (long, short, intermittent).

Music also shares these four attributes, although its nature is intentional and organized compared to a sound effect, which tends to be natural and disorganized (Reybrouck et al., 2019).

Bearing in mind these characteristics, Martínez-Martínez (2015: 291) identifies three predominant strategies for describing sound and music in SDH: categorization, attribution, and explanation. Categorization involves identifying the sound source based on its nature (door, bell, music); attribution entails assigning a quality to the sound source (creaking door, high-pitched bell, pop music); and finally, explanation refers to the use of additional information or another attribution complementing a complex sound or describing music (distant creaking door, intermittent high-pitched bell, pop music by Britney Spears). The correlation between the sound’s nature and the employed strategy was analyzed in a DVD movie SDH corpus within Martínez-Martínez’s doctoral thesis (2015). However, the rise of VOD platforms has created a revolution for this translation modality, which makes this study a continuation thereof.

Concluding this section, it is crucial to acknowledge the significance of considering sound and its characteristics within cinematic language to select the most appropriate intersemiotic translation strategy.

3 Standards and Style Guides: Not Very Instructive Instructions

To better understand the decision for choosing a translation strategy over another, it is essential to study the regulations governing SDH in each country and language analyzed in our corpus, as well as the style guides of platforms like Netflix, where these subtitles are hosted.

While this analysis began in earlier studies (Martínez-Martínez, 2021), this study is innovative due to its focus on intersemiotic translation and the incorporation of an analysis of Netflix guidelines.

3.1 The USA

The USA was among the first countries to support making all television content accessible. In 1972, the program ‘The French Chef’ was first aired with closed captioning (Global Captioning Leader, 2011). This set a precedent for the Americans with Disabilities Act of 1990, which mandated that all public and private networks provide their content with SDH.

The guide governing SDH in this country is the Captioning Key of the Described and Captioned Media Program (2024). Concerning sounds and music, the guide comprises several technical instructions, such as position and format (bracketed and italicized if produced off-screen), yet the most significant aspects concern the prescribed translation strategies.

It emphasizes the importance of identifying the sound source and attributing it to a character, unless explicitly shown on-screen, while using specific terminology (‘robin sings’ instead of ‘bird sings’) and employing onomatopoeias to reinforce the sound. While onomatopoeias, as Pereira Rodríguez (2010: 93) suggests, may benefit post-lingually deaf individuals, the lack of established onomatopoeias for all potential sounds in a film soundtrack can impede SDH comprehension. Onomatopoeias like ‘thrack’ for a hit are somewhat arbitrary, as they might not be as recognizable as ‘woof’ for a dog, potentially causing added difficulty in sound comprehension.

As for sound duration, an instruction that can help the user understand this feature is to segment subtitles or use punctuation to indicate the speed or rhythm of a sound, alongside employing the present participle (‘barking’) for continuous sounds and the third person singular (‘barks’) for abrupt or intermittent sounds.

Regarding music, the standard states that background music should be subtitled only if essential to the plot, explicitly stating the title, composer, or artist, accompanied by an adjective describing the conveyed emotion or mood. Interestingly, it advocates using objective adjectives, avoiding value judgments such as ‘beautiful music’ or ‘melodic music’, which could compromise the objectivity of the viewer’s experience.

For songs, the Captioning Key recommends objectively reflecting their mood, indicating the song title and singer, and presenting the complete lyrics. However, the inclusion of title and artist might only be useful if they reflect the functional use in the film or if the artist is well-known, allowing the user to form a mental image of the piece’s mood and the values associated with the artist’s image. Otherwise, for obscure titles or unknown artists, these subtitles might be opaque or even puzzling for a deaf user.

Next, we analyze Netflix’s English guidelines (2022a), which share similarities with those previously examined. For instance, for songs, they advise specifying the title of an instrumental piece only if it is widely recognized, which resolves some of the earlier ambiguity. However, to indicate ambient music, they recommend using a ‘generic ID’, an instruction that is itself rather generic. Another instruction mentions the need to present the genre or mood of instrumental pieces, a relevant strategy for better comprehension. Additionally, the guide underscores the importance of subtitling silence when pertinent to the plot, something as crucial as the sound itself, as Zdenek (2015) indicates.

Finally, regarding sounds, the Netflix guide emphasizes the importance of detailed and descriptive subtitles in intersemiotic translation. It focuses on the importance of describing voices, speech speed, or sound volume, referencing three of the sound qualities previously discussed: timbre, duration, and intensity, respectively.

3.2 Spain

Spain was also a pioneer in SDH, subtitling televised content for the first time in 1993 (Cuéllar Lázaro, 2016). Since then, the country has regulated this translation modality twice, through the UNE 153010 standards of 2003 and 2012, as well as through laws emphasizing the need to safeguard the rights of people with hearing disabilities, such as Law 7/2010.

The current guiding standard for SDH, UNE 153010: 2012, also has two sections providing instructions for the intersemiotic translation of sounds and music. Regarding sound effects, the norm is limited: along with format, it merely specifies that descriptions of effects should be nominalized, avoiding action verbs in favor of nouns describing the sound source, and prohibiting descriptions of the sensation produced by receiving the sound (‘birds are heard’) in favor of the emission source (‘birds’).

However, instructions for describing music are clearer and more pertinent to professionals’ needs. The guide stipulates that each musical piece should be described by its genre, the conveyed emotion, or the identification of the piece. Yet these instructions are not entirely clear or objective because, apart from the identification issue analyzed above, the emotion conveyed can vary with each subtitler’s personal background, potentially contradicting the original author’s intent.

Netflix’s Spanish guide (2023) provides some of the most detailed strategies for music description, the most specific among those analyzed in this study. It emphasizes subtitling diegetic music, which is pertinent, as diegetic sounds are part of the film’s depicted universe. While identifying the piece is considered a useful strategy, the guide prefers describing the genre or mood over piece identification, since an unrecognized piece can be difficult to grasp from its title and artist alone. It also recommends a structure for music description, music + genre + description, which proves very useful for establishing a controlled language enabling more orderly and homogeneous knowledge access for users.

Regarding sound effects, the instructions are also pertinent, emphasizing concise and clear descriptions, using the simple present tense for sounds emitted by characters and nominalization for diegetic effects. However, each subtitler’s judgment prevails, as the instructions state (‘please, use your best judgment in these cases’).

3.3 Germany

SDH appeared earlier in Germany than in Spain, in 1980, on the ARD network (Cuéllar Lázaro, 2016: 149). However, its use was not regulated until 2014, when, under pressure from German associations of people with hearing disabilities, Germany included in its cinema promotion law (Filmförderungsgesetz) the requirement that every produced film have an audio-described version and a version with SDH.

However, unlike Spain’s UNE 153010: 2012 standard, Germany lacks a comparable standard, although there are guidelines such as the Gemeinsame Untertitelrichtlinien für den deutschen Sprachraum [Common Subtitling Guidelines for the German-Speaking Area] (Méan, 2013), formulated by associations of people with hearing disabilities from Germany, Switzerland, and Austria, which establish a set of minimum quality standards. For intersemiotic translation, they specify the format for sounds and music (white on black, in parentheses), prefer onomatopoeias (like ‘wuff’ for a dog bark) and descriptions of paralinguistic elements where the sound is unclear, and indicate original song lyrics within hash marks, providing German translations when possible.

As for Netflix’s own German guide (2022b), it mirrors the one analyzed for the USA, offering clearer descriptions of sounds based on timbre, duration, and intensity, yet as will be observed in the results section, it might not be sufficient.

Concluding this theoretical section, it is evident that the imprecision of the instructions for intersemiotic translation may lead to inconsistent and heterogeneous strategies in our corpus.

4 Methodology

In order to understand the most commonly used intersemiotic strategies in each of the studied languages, it is necessary to analyze a representative corpus of series from VOD platforms that can provide conclusive results.

For this reason, a corpus was compiled comprising the SDH of several episodes from three similarly themed series (young adult crime) from one of the most representative VOD platforms, Netflix (Durrani, 2023), in German, English, and Spanish: Kitz (Reinbold et al., 2021), How to Get Away with Murder (Nowalk et al., 2014), and Élite (Salazar et al., 2018). To ensure a representative sample, four episodes were selected from each series. Given the nature of the study, only subtitles resulting from intersemiotic translation and those describing the paralanguage of character interventions were chosen.

For the analysis, the labeling system developed by Martínez-Martínez (2015) was employed. It is a quantitatively oriented labeling system for analyzing SDH based on two levels: sound origin and the translation strategy used. On the second level, we focus exclusively on the three major strategies described by the author for the intersemiotic translation of sound effects and music, categorization, attribution, and explanation, as defined in the previous section. Subtitles were labeled manually using the qualitative analysis software MAXQDA, and the results are presented in the next section.
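As a rough illustration only, the two-level annotation described above can be modeled as pairs of labels that are then aggregated for the quantitative analysis. The sketch below uses invented field and category names (the actual labeling was performed manually in MAXQDA, not with this code):

```python
from collections import Counter

# Two-level labelling scheme (sketch): level 1 is the sound origin,
# level 2 is the intersemiotic translation strategy. The label sets
# follow Martínez-Martínez (2015); field names are our own invention.
ORIGINS = {"nature", "animal", "human", "identification", "object", "music"}
STRATEGIES = {"categorization", "attribution", "explanation"}

def count_labels(subtitles):
    """Aggregate (origin, strategy) pairs over a list of labelled subtitles."""
    counts = Counter()
    for sub in subtitles:
        if sub["origin"] in ORIGINS and sub["strategy"] in STRATEGIES:
            counts[(sub["origin"], sub["strategy"])] += 1
    return counts

# Hypothetical labelled subtitles, in the spirit of the corpus examples.
sample = [
    {"text": "[birds chirp]", "origin": "animal", "strategy": "categorization"},
    {"text": "[loud bang]", "origin": "object", "strategy": "attribution"},
    {"text": "[birds sing]", "origin": "animal", "strategy": "categorization"},
]
print(count_labels(sample)[("animal", "categorization")])  # 2
```

Cross-tabulating these pair counts per language yields the frequency figures discussed in the next section.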

5 Analysis and Results

We now present the results for the predominant intersemiotic strategies in the SDH in the three languages. We first present the common findings in the corpus to establish the most frequently used strategies, followed by a discussion of the differences observed between languages (Fig. 1).

Fig. 1
Frequency of categorization, attribution, and explanation in DE, ES, and EN, considering character identification. DE: categorization 473, attribution 164, explanation 376; ES: categorization 475, attribution 123, explanation 141; EN: categorization 321, attribution 100, explanation 138.

The figure indicates that the most used strategy in all three languages is categorization (55%), followed by explanation (28%) and attribution (17%). This may be attributed to the frequent occurrence of character identification (‘Wes’, ‘Roger’, ‘Mutter’ [mother]) throughout our corpus. Therefore, in the subsequent figure, we have opted to exclude these subtitles to prevent this label from skewing the data.
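The overall shares just cited can be reproduced from the Fig. 1 counts, summing each strategy across the three languages; the following Python fragment simply restates that arithmetic:

```python
# Strategy counts from Fig. 1 (DE + ES + EN), character identification included.
counts = {
    "categorization": 473 + 475 + 321,
    "attribution": 164 + 123 + 100,
    "explanation": 376 + 141 + 138,
}
total = sum(counts.values())  # 2311 subtitles in total
shares = {k: round(100 * v / total) for k, v in counts.items()}
print(shares)  # {'categorization': 55, 'attribution': 17, 'explanation': 28}
```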

We observe in Fig. 2 that the results vary across the three languages in the corpus. In DE, explanation stands out (47%), followed by categorization (37%) and attribution (16%). In ES, on the other hand, categorization prevails (64%), followed by explanation (19%) and attribution (17%). In EN, a similar pattern to Spanish is observed, although with different frequencies of use: categorization prevails significantly (55%), followed by explanation (28%), and lastly attribution (17%).

Fig. 2
Frequency of categorization, attribution, and explanation in DE, ES, and EN, not considering character identification. DE: categorization 218, attribution 164, explanation 376; ES: categorization 205, attribution 123, explanation 141; EN: categorization 143, attribution 100, explanation 138.

To better understand these results, each of the three predominant strategies in intersemiotic translation will be thoroughly examined, considering the origins of the sounds and their consistency. We will select the most representative examples. It should be noted that there are sound sources without representation in our corpus, possibly because the series’ plot did not require it or the subtitler chose to omit its translation.

5.1 Categorization

Below is the cross-reference of labels between the translation technique categorization and the sources of the original sound text, following Martínez-Martínez’s (2015) classification: ‘nature’, ‘animal’, ‘human’, ‘object’, and ‘music’ (Fig. 3).

Fig. 3
Frequency of categorization in the corpus by sound origin. Nature: DE 0, ES 6, EN 0; animal: DE 9, ES 9, EN 0; human: DE 196, ES 147, EN 136; identification: DE 255, ES 270, EN 178; object: DE 21, ES 42, EN 7; music: DE 0, ES 1, EN 0.

As observed, the most common sound sources, excluding character identification (53.91% in DE, 56.84% in ES, and 55.45% in EN), are ‘human’ (41.44% in DE, 30.95% in ES, and 42.37% in EN) and ‘object’ (4.44% in DE, 8.84% in ES, and 2.18% in EN). We will now analyze the sources ‘animal’, ‘human’, ‘object’, and ‘music’ to observe examples of inconsistency or typical structures in each language.
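The per-source percentages quoted above follow from the Fig. 3 counts divided by each language's categorization total from Fig. 1; for German, for example, this quick Python check reproduces them:

```python
# German categorization counts by sound origin (Fig. 3); the denominator
# is the DE categorization total from Fig. 1, as used in the text.
de_counts = {"identification": 255, "human": 196, "object": 21,
             "animal": 9, "nature": 0, "music": 0}
de_total = 473  # DE categorization subtitles (Fig. 1)
shares = {k: round(100 * v / de_total, 2) for k, v in de_counts.items()}
print(shares["identification"], shares["human"], shares["object"])
# 53.91 41.44 4.44
```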

Despite the low number of subtitles found for sounds emitted by animals (1.9% in DE, 1.9% in ES, and 0% in EN), we would like to highlight an example found in the German series. It involves identifying the sound source + the sound produced (‘Vögel zwitschern’ [birds chirp], ‘Kühe muhen’ [cows moo], or ‘Kuh brüllt’ [cow bellows]). In the cow examples, an inconsistency and terminological error can be observed in the verbs ‘muhen’ and ‘brüllen’: the former is used for both cow and bull, while ‘brüllen’ is used exclusively for the male animal.

In sounds produced by humans, some very recurrent structures are noted. In DE, sounds are rendered using separable (‘schreit auf’ [cries out]) and non-separable verbs (‘stöhnt’ [moans]). In ES, transitive and intransitive verbs (‘exhala’ [exhales]) and reflexive verbs (‘se mofa’ [mocks]) are used, even though these actions are clearly visible on screen. In EN, there are transitive and intransitive verbs (‘chuckles’) and verbs in the present participle (‘grunting’). The case of the Spanish verb ‘asiente’ [nods] is noteworthy: the visual component clearly shows the character performing the action, rendering the subtitle unnecessary, since it increases the cognitive load for the deaf person (Martínez-Martínez, 2022). It is important to note that the use of verbs is quite recurrent, with numerous examples, mostly related to alternating sounds.

In sounds from inanimate objects, linguistic codification often involves describing the type of sound produced by the object using a singular or plural noun (in German ‘Piepsen’ [beeping]; in Spanish ‘pitidos’ [beeps]; in English ‘beep’). It is also common to find nouns identifying the sound source (‘Gong’ [gong] or ‘coche’ [car]). Additionally, the German series presents action verbs like ‘piept’ [beeps].

Finally, for ‘music’, only one example was found, in the Spanish subcorpus: ‘música’ [music], a subtitle that is too vague, lacking relevant information for users and not following the previously reviewed instructions, making it ineffective for describing the plot.

5.2 Attribution

We will now analyze the attribution strategy (Fig. 4).

Fig. 4
Frequency of attribution in the corpus by sound origin. Nature: DE 5, ES 0, EN 0; animal: DE 7, ES 0, EN 5; human: DE 119, ES 30, EN 73; object: DE 33, ES 49, EN 22; music: DE 0, ES 44, EN 0.

In this label intersection, as with the categorization strategy, the predominant sources of sound are ‘human’ (72.56% in DE, 24.39% in ES, and 73% in EN) and ‘object’ (20.12% in DE, 39.84% in ES, and 22% in EN). Moreover, in ES, the use of attribution for ‘music’ is significant, accounting for 35.77% of the attribution cases.

Let us start with the human source, which presents a variety of attributions. The most frequent attribution consists of using adjectives that refer to moods or physical states that alter the voice, such as ‘vorwurfsvoll’ [reproachful] in German and ‘irónico’ [ironic] in Spanish. Other attributions relate to sound intensity, expressed in DE, EN, and ES (‘laut’ [loud], ‘lloroso’ [tearful], ‘seufzt leise’ [sighs softly], ‘respira fuerte’ [breathes heavily], ‘breathing deeply’, or ‘breathing heavily’). The last two EN examples draw attention, since ‘breathing deeply’ means much the same as ‘breathing heavily’; however, the former seems to emphasize the depth of the breathing, while the latter focuses on the act of breathing itself. Less frequently, adjectives add a quality to the original sound, in EN with an adjective + noun structure (‘indistinct conversations’) and in ES with a noun + adjective structure (‘conversaciones inaudibles’ [inaudible conversations]).

Finally, in EN, there is a notable use of verbs ending in -ing in different patterns, such as adjective + -ing verb (‘indistinct talking’), noun + -ing verb (‘voice breaking’), or -ing verb + adverb (‘breathing heavily’).

For objects, a similar pattern emerges. In most cases, nouns representing the sound source are paired with adjectives indicating various sound qualities. In EN and ES, it is common to find nouns + verbs ending in -ing (in EN) or the gerund (in ES): ‘telephone ringing’ or ‘agua salpicando’ [water splashing], respectively. In ES, structures such as noun + adjective referring to the characteristics of the sound source (‘disparos electrónicos’ [electronic shots]), noun + participle not directly related to the sound (‘llamada desconectada’ [disconnected call]), or noun + prepositional complement (‘vibración de móvil’ [mobile phone vibration]) recur widely. In DE, there is a preference for adjectives referring to sound intensity + nouns (‘lauter Knall’ [loud bang]) and to the type of sound production (‘skurrile Klänge’ [bizarre sounds]).

Lastly, for ‘music’, we found subtitles only in the Spanish corpus. These subtitles identify the film genre associated with the music (‘música de intriga’ [intrigue music] or ‘música de suspense’ [suspense music]), interpret the musical sound in terms of the functions described in film music theory (‘música triste’ [sad music] or ‘música emotiva’ [emotional music]), or refer to the proximity or distance of the sound source (‘música distante’ [distant music]).

5.3 Explanation

We will now present the explanation category (Fig. 5).

Fig. 5
Frequency of explanation in the corpus by sound origin. Nature: DE 5, ES 0, EN 1; animal: DE 8, ES 4, EN 2; human: DE 142, ES 75, EN 62; object: DE 56, ES 12, EN 70; music: DE 165, ES 50, EN 3.

The prevalent sources for explanation are human sounds (37.77% in DE, 53.19% in ES, and 44.93% in EN), objects (14.89% in DE, 8.31% in ES, and 50.72% in EN), and ‘music’ (43.88% in DE, 35.46% in ES, and 2.17% in EN).

We will begin with human sounds, where we found only two patterns for explaining a sound produced in the audiovisual content. The first locates the sound (‘Kind schreit im Hintergrund’ [child screams in the background] or ‘indistinct shouting in distance’). The second focuses on the duration or temporal elements of the sound (‘summt weiter’ [keeps humming]).

For objects, there is a clear parallelism with human sounds. The origin of the sound is located (‘bandejas por el suelo’ [trays on the floor]), or the duration or temporal elements of the sound are explained (‘Handy summt mehrmals’ [mobile phone buzzes several times] or ‘cellphone continues ringing’). Subtitlers also explain the concurrence of sounds, for example, ‘Motor springt an und Wagen fährt weg’ [engine starts and the car drives away]. This last subtitle, moreover, is redundant (Martínez-Martínez, 2022), as the user visually perceives the car moving.

For ‘music’, subtitlers often identify the sound intensity (‘abrupter Übergang’ [abrupt transition], ‘Musik wird leiser’ [music fades]) or identify the song title and performer, as indicated in the Netflix style guides (‘“Home” von Somer’ [“Home” by Somer], ‘Naughty Boy’s “No one’s here to sleep” plays’, or ‘suena “Fuel to Fire” by Agnes Obel’ [“Fuel to Fire” by Agnes Obel plays]).

6 Conclusion

The analysis presented in this study represents an initial description of current practices in the intersemiotic translation of music and sound effects in DE, ES, and EN. The prevalent use of categorization and explanation aligns with the instructions stated in Netflix style guides, although some instances have shown these techniques to be insufficient in describing film sound.

Additionally, some predominant syntactic structures have been identified in each language for describing music and sound effects, focusing on the sources of the sound. However, these structures are still inconsistent across languages and even among episodes within the same corpus. These identified structures represent a step towards establishing or creating a controlled language that could enable easier access to understanding film soundtracks for individuals with hearing impairments. Nevertheless, this inconsistency hampers knowledge accessibility and signifies a lack of understanding of the semiotics of the original text and its structural dimension.

The limitations of this study are the lack of analysis of other guidelines, such as those of Disney, HBO, or Amazon, and the fact that these guidelines are constantly evolving toward improved intersemiotic strategies; in addition, only one genre, young adult crime, was analyzed, which limits the picture of SDH strategies for sounds and music.

Future research directions include expanding the corpus into different languages or with a broader sample, studying sound functions to identify associated strategies, or establishing a controlled language to help subtitlers ensure greater consistency in creating intersemiotic subtitles. Despite limitations associated with sample size and the source of subtitles being from a single platform, Netflix, we are confident that this study represents a step forward in research on intersemiotic translation in SDH.