Professional Sign Language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software, which is designed to ease the translator's task. No prior study has been conducted on this topic, and we aim to specify such software. To do so, we based our study on professional Sign Language translators' practices and needs. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals to collect both objective and subjective data, we build a list of tasks and examine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist those tasks, how to adapt existing tools to Sign Language, and what must be added to fit the needs of Sign Language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.
Text-to-text translation tasks have been assisted by software for decades. Computer-assisted translation (CAT) offers tools that allow translators to gain time and efficiency (Koehn 2009) by automating repetitive tasks and storing prior work for later reuse. Not only do they provide tools to help the translation process (such as glossaries or dictionaries), but they now offer an entire integrated working environment in which translators can find everything they need to work. Such software supports many written languages and is today at the heart of the translation process. However, none of it can yet help with Sign Language (SL) translation, despite the growing need for translated content. In 2008, the United Nations adopted the Convention on the Rights of Persons with Disabilities (CRPD) (UN Assembly 2008).
It emphasizes everyone's right to full access to information and communication. In France, despite the creation in 2011 of a master's degree in French/French Sign Language translation and mediation, the number of professional translators is very low. Of the 144 SL interpreters identified by the researchers, fewer than a dozen are professional SL translators and only three are registered as AFILS members (AFILS is the French SL interpreters' and translators' association). And contrary to their text-to-text counterparts, those SL translators cannot yet benefit from CAT tools. Indeed, none of the software currently available on the market seems to support SL as a working language: it offers no way to work on and with videos, and includes no resources in any SL.
Sign Language interpretation and Sign Language translation are not to be confused. The classical way to distinguish the two is to say that interpretation concerns oral languages while translation concerns written texts. But the major difference, and this is even more true for SLs as they do not have an editable written form, is the possibility to post-edit the translation. Indeed, translators often work with deadlines. Knowing their projects are due by a specific date, they can rework their translations as many times as they want, until they reach what they believe to be the best version in the available time. Interpreters, however, work live. They translate on the spot, orally, and therefore can only provide a unique version of their translation. Once it is delivered, it cannot be edited (Filhol and Tannier 2014). Before asking ourselves how to adapt present CAT software to Sign Language translation, the first step is to identify the key points of current CAT tools and see what they really bring to professional practices. The second step will be to see which tasks are involved in the text-to-sign translation process. Next, we can think about the way in which those tasks can be assisted and/or automated, by discussing with the professionals involved and reviewing the needs we collected from them.
To do so, we worked with two groups of professional Sign Language translators. We aimed for both objective and subjective data because people can struggle to verbalize their activities in detail, and tend to omit things they actually do. Conversely, when asked, they can express things that we do not observe in their everyday practice. The two groups are disjoint, meaning that each participant took part in only one of the experiments. The first experiment consists of collecting objective data by assigning a translation task to the translators, which they execute without further interaction with us. The second experiment consists of a brainstorming session to collect subjective data through participants' insights: they reflect on needs, practices and problems without being assigned any task. The aim of the two studies we present is to identify the steps of the translation process, see whether they are systematic, and whether they are performed in a certain order.
State of the art
Sign Languages are natural languages using the visual-manual modality to convey meaning through manual articulations, body gestures, facial expressions, etc. Where spoken languages are linearly constrained (only one sound can be produced at a time), SLs express multiple pieces of information in a single moment. They also make heavy use of persistent spatial references which can be reused when needed throughout the discourse. Discourse organisation in SL consists of first setting the scene by introducing times, places and characters, and then telling about the events taking place. Once the scene is set, or a character or place has been introduced, it can later be referenced using only the associated spatial references. SLs are also iconic languages, meaning that the signs are inspired by reality (called depicting shapes, Cuxac and Sallandre 2007). To some extent, they are sometimes constrained by physical reality, especially when it comes to locations. Indeed, the SL content must be highly visual and accurate when referring to spatial relationships. When signing about two cities and their geographical relation, the signer must stick to reality as much as possible, meaning that the cities must be situated in the signing space in a way that accurately reflects their orientation and distance in the real world.
Some systems exist to describe SL in graphic form (such as SignWriting or HamNoSys), but they are not used widely enough by the deaf community to be considered an editable written form of SLs. Those systems mostly serve the research community or the education system, but are not spontaneously used by deaf people to communicate in written situations. Therefore, we can say that SLs do not have a formal and editable written form as there is for English or French. Videos are currently the most common way to record SL, but they are neither easily searchable nor shareable compared to text.
Recent studies have taken an interest in SL translation from the automation point of view, focusing on capturing SL or generating signs, and mainly on sign recognition. Bukhari (2015) proposed a sign-to-speech translation system for Android phones, based on the use of one sensory glove. Apart from being an invasive and constraining system to use, it only captures the manual articulators of one hand, without taking facial expressions into account. Moreover, the system was trained with only 20 signs, thus focusing on lexical items. Sensory glove systems work best for finger-spelling recognition, but cannot be considered translation systems because SL comprises far more than spelling. Text-to-sign translation studies are often paired with avatar technology issues, as in Halawani's Arabic SL translation system for mobile devices (Halawani 2008). It takes text input, whose corresponding signs are then animated by an avatar, based on a database of motion-captured signs. Even if avatar technology is improving, such text-to-sign translation systems still focus on lexical elements, with a strict grammar and little iconic output. SL resources being rare, Barberis et al. (2010) and Bertoldi et al. (2010) both proposed work based on Italian Sign Language. The first is about machine translation (MT) for Italian Sign Language, from text to animated avatar; they mention statistical translators such as MOSES, which they trained for SL. The second is about the creation of a parallel corpus between Italian and Italian Sign Language, within the ATLAS project. This corpus is meant to train a virtual interpreter rendered as an animated signer. In parallel, researchers at DePaul University have been working for the past 15 years on the issue of accessibility for deaf people and the development of a signing avatar. They have also addressed machine translation from text to sign (Wolfe et al. 2016).
The next section discusses the features of text-to-text CAT applications.
Computer-assisted translation (CAT)
CAT relies on three fundamental features. The first is the exclusive use of text as the way to convey and process information, hence the need for a written form. Intended for text-to-text translation, the input data to translate is obviously written. Whether the translator works from scratch or uses the tools the software provides, the input is text. Likewise, the tools provide assistance only in written form: glossaries, terminology assistants, and the translation memory (TM) give access to written-only content. Each action thus implies keyboard editing: writing the translation, making modifications, or searching with the tools.
The second key point is what we have called “the principle of linearity”. When a source text is uploaded to the software to be translated, it is automatically divided into smaller parts, which we call segments. Those segments are usually determined using the sentence pattern: the software isolates the layout segments (titles, subtitles, captions), and cuts whenever it encounters a full stop, a line break or a new bullet. This function can be parameterized, but does not allow much more freedom. Each source segment comes with a corresponding empty target segment, which the translator has to fill in with its translation. The order of those segments cannot be changed. The translator can merge segments, but cannot take the first one and insert it later in the production. Hence the principle of linearity: it is assumed that, for text-to-text translation, the concatenation of the translated segments will result in the translation of the concatenated source segments, in the same order.
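The segmentation rule just described can be sketched as a short function. This is a minimal illustration under our own assumptions, not the implementation of any existing CAT product:

```python
import re

def segment(source_text):
    """Split a source text into translatable segments, roughly the way
    CAT tools do: line breaks and bullets act as hard boundaries, and
    sentence-final punctuation cuts inside each block."""
    segments = []
    for block in source_text.split("\n"):
        # strip bullet markers; a blank line produces no segment
        block = block.strip().lstrip("-*• ").strip()
        if not block:
            continue
        # cut after a full stop (or ? / !) followed by whitespace
        segments.extend(s for s in re.split(r"(?<=[.?!])\s+", block) if s)
    return segments

# each source segment is paired with an empty target segment to fill in
pairs = [(src, "") for src in segment("First sentence. Second one.\n- A bullet item")]
```

The fixed pairing of source and target segments, in source order, is precisely what encodes the principle of linearity.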
The third is the possibility to capitalize on prior work by means of the translation memory (TM). It is the real innovation that CAT brought to translation practices. First commercialized in the 1990s, it is now a staple tool for every professional translator. It stores alignments (i.e. each source segment paired with its translated segment) progressively as the translator completes the empty target segments. If one of those stored segments is later encountered, the TM automatically suggests its previous translation. The operator is free to accept the suggestion, with or without modifications, or to reject it. The similarity percentage above which the TM suggests a match is 80% by default, but can also be adjusted in the parameters. TMs can easily be shared between colleagues, and can even be provided by the client. This tool has had a considerable impact on translators' practices in terms of productivity, cost reduction and consistency of the translation output (Lagoudaki 2006; O'Hagan 2009).
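The TM mechanism can be sketched in a few lines. This is a toy illustration only: commercial CAT tools use dedicated fuzzy-matching algorithms, for which Python's difflib similarity ratio stands in here:

```python
from difflib import SequenceMatcher

class TranslationMemory:
    """Toy translation memory: stores source/target alignments and
    suggests the stored translation of the closest source segment."""

    def __init__(self, threshold=0.80):   # 80% similarity by default
        self.threshold = threshold
        self.alignments = {}              # source segment -> target segment

    def store(self, source, target):
        self.alignments[source] = target

    def suggest(self, source):
        best, best_score = None, 0.0
        for stored, target in self.alignments.items():
            score = SequenceMatcher(None, source, stored).ratio()
            if score > best_score:
                best, best_score = target, score
        return best if best_score >= self.threshold else None

tm = TranslationMemory()
tm.store("The leader rejected the measures.", "Le chef a rejeté les mesures.")
tm.suggest("The leader rejected the measure")   # near match -> prior translation suggested
```

The translator then accepts, edits, or rejects the suggestion; only accepted segments go back into the memory.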
Sign language and CAT?
Yet, we did not find any mention of SL CAT software in which the human user is involved. MT could be used in CAT, but we would like to focus on the entire translation environment rather than on just the MT output. Our aim is to equip professional translators with tools suited to their tasks: pertinent and accessible lexical and encyclopedic resources, video recording, a video player, an adaptable way to handle translation information, and segmentation tools. Automatic translation for SL could have potential benefits, but in its current state it does not bring enough help to translators to be considered a tool to integrate into the prototype we are developing. It often requires post-editing, which wastes time, and the results may lack nuances of meaning. The Signall project is a great tool to assist direct communication between deaf and hearing people, but text-to-sign translation requires more detailed work on meaning. As we want the human operator to be in complete charge of the translation, translators will be our first source of data. The next section describes the experiments we set up in order to learn more about their professional practices.
First experiment: observing translators at work
The first part of this work concerns objective data. To analyse translators' actions, we filmed them at work in their workplace, from the first reading of the text to the delivery of the translated result.
The translation exercise was performed at the translators' office. The participants worked in pairs, each composed of a professional Sign Language translator (deaf) and a professional Sign Language interpreter (hearing), both working on the same text together. This is the company's usual arrangement, though the participants were not yet used to it. The benefit for us is that the problems they encountered would be discussed and verbalized, which allowed us to annotate the process in tasks. By task, we mean a specific action taken in the translation process. One pair was considered expert, the other a beginner pair. We organized the filming in sessions, each time submitting a selection of journalistic texts to translate. The first session was a break-in session with three short three-line news items, dealt with one by one, so that the participants would get used to the set-up. The second session assigned twenty items of the same kind, and allowed the participants to organize their work freely. Finally, the third and last session proposed three long texts (about half a page each). The recordings took place over 5 days, and we collected a total of 16 h of footage, with one camera view on each translator's table and computer. When they were ready to deliver a translation, we filmed the result with the same cameras. They could split the performance into several takes, but no effort was put into later editing since the final result was not our focus.
Below is an example of the news they had to translate, in French, then in English:
Le chef des soldats rebelles du Timor oriental, Alfredo Reinado, a opposé une fin de non recevoir mercredi aux mesures présidentielles destinées à rétablir le calme dans le pays, rendant ainsi peu probable un dénouement rapide d'une crise qui perdure depuis plusieurs semaines.
The leader of East Timor's rebel soldiers, Alfredo Reinado, rejected on Wednesday presidential measures aimed at restoring calm in the country, making it unlikely that there will be a quick end to a crisis that has been going on for several weeks (Fig. 1).
We labelled the tasks on the video timeline. Once extracted, the result is a list of tasks identified with their time tags for each translation. Next we added up the durations spent on each task, then on each translation, and lastly in total. We then compared the annotations between pairs but also between texts, to detect repeated and shared practices, and a total or partial order in the translation process. At the time of writing, three short texts for both groups (equivalent to 3 h of translation) had been annotated. We also annotated one longer text from the expert group, to compare the tasks with respect to the length and complexity of the source text.
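The per-task aggregation amounts to summing time-tagged annotation spans. A small sketch of this computation (the task names and durations below are made up for illustration, not data from the study):

```python
from collections import defaultdict

def total_per_task(annotations):
    """Sum the time spent on each task across annotated spans.
    Each annotation is (task, start_seconds, end_seconds), as
    exported from a video-timeline annotation tool."""
    totals = defaultdict(int)
    for task, start, end in annotations:
        totals[task] += end - start
    return dict(totals)

timeline = [("lexical search", 0, 120),
            ("segmenting/ordering", 120, 600),
            ("lexical search", 600, 660)]
total_per_task(timeline)   # {'lexical search': 180, 'segmenting/ordering': 480}
```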
Results for short texts
Here are the tasks we identified, with a brief description:
Lexical search: the translators solicit various resources in order to find the adequate sign for concepts, including place names or proper names.
Discussing signs: for concepts with slight meaning differences influenced by context, the translators often discuss which signs are best suited for the situation.
Map search: when it is required to depict relative geographical locations or to sign a place when no specific sign is known or found, translators search for maps and plans.
Definition look-up: searching for definitions of source words or concepts which are not clear to the translator’s mind. It also may help to find a way of signing it if no sign is known or found.
Encyclopaedic look-up for context: when the source text refers to previous events or links between people that are unknown to the translators, they collect background about it.
Picture search: to identify protagonists cited in the source text or to find a suitable periphrasis; also to describe things that need to be depicted, or to assign a pertinent temporary sign to someone or something for the duration of the translation.
Segmenting the text and ordering segments: this includes dividing the source text into smaller units, arranging them in an appropriate order for Sign Language, one translator rehearsing the result and discussing with the other to rework the discourse organisation.
Memorizing: the translator signs for himself, without interacting or even looking at his colleague. It is a very personal and informal production of the intended result, repeated as many times as needed to be completely memorised.
A video depicting each of those tasks involved in the translation process can be found at the following address: http://sltat.cs.depaul.edu/2019/kaczmarek.mp4.
These tasks can be completed in different orders, with different resources and over varying periods of time depending on the text or the pair. Most importantly, however, they remain the same. Below is a table that summarizes the results. The far-left column lists the translation identification codes, where B stands for Beginner and E for Expert. The header lists all the previously identified tasks. A green square means that the task was identified during that translation; a grey square means it was not. The second column shows the total duration of each translation. This table concerns only the first three short texts for each pair (Fig. 2).
We can see that the segmenting and ordering task (item 7) is observed in every one of the six analysed translations. This shows that translators always spend time working out an order for their translation. The delivered signed results also show that this order always differs from that of the source texts. Besides being the only systematic task of the eight we identified, it is also the most time-consuming one. With a total of 66 min, segmenting and ordering represents almost a third of the total time spent translating. This is evidence that the principle of linearity is neither assumed when translating nor verified in the results of SL translation, which is a major challenge in adapting the typical CAT interface to Sign Languages.
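This observation suggests that an SL-oriented interface must decouple the target order from the source order. A speculative sketch of such a working area (class and method names are our own invention, not part of any existing tool):

```python
class SegmentBoard:
    """Non-linear working area: source segments keep their identity,
    but the target order is free and editable, unlike in text-to-text
    CAT where source and target order must coincide."""

    def __init__(self, source_segments):
        self.source = list(source_segments)
        self.order = list(range(len(self.source)))  # initially linear

    def move(self, position, new_position):
        """Move the segment currently at `position` to `new_position`."""
        idx = self.order.pop(position)
        self.order.insert(new_position, idx)

    def target_view(self):
        """Segments in the order chosen for the signed translation."""
        return [self.source[i] for i in self.order]

board = SegmentBoard(["crisis ongoing", "leader rejects measures", "on Wednesday"])
board.move(2, 0)     # SL discourse often sets the scene (time, place) first
board.target_view()  # ['on Wednesday', 'crisis ongoing', 'leader rejects measures']
```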
Results for long texts
The experiment also included two longer texts for each group. These are still journalistic news items, about half a page long. They contain more complex news involving many characters and their relationships, as well as references to past events. The methodology was exactly the same. An example of the longer texts they were asked to translate can be found in the annex.
The analysis of the videos identified some tasks that were also seen in the shorter texts. These include:
Segmenting and ordering the text.
However, the translation process for the longer texts differed from that for the shorter texts in several important ways, described in the list that follows:
Text marking: the translators highlight or underline parts of the text, either for further research later or to keep them in mind as key points of the message.
Text comprehension: because the source text is longer and more complex, they take time to read it carefully, signing to themselves or each other to make sure they understood the same message.
Discuss meaning: when the message can be ambiguous, they discuss slight nuances of meaning. This task is often paired with, or interrupted by, encyclopaedic look-up episodes.
Discuss translation: they discuss either translation choices (for example, should they spell out entire names or try to assign a sign to refer to the person) or the formulation used by the signer. The point is to find the formulations that make the information as clear and accurate as possible.
Translation: this is a form of rehearsal. One translator films himself, while the other watches. This serves as a basis for discussing the translation, but is also a way of memorizing it. The videos created during this process can ultimately serve as the final product.
Figure 3 is a table showing the number of occurrences for each task during the translation of one longer text, as well as the total time spent doing so:
We can observe that segmenting and ordering tasks are the most time-consuming in the process, as with the shorter texts. The second most time-consuming task in our example is filming translations, but the amount of time spent is not representative because it depends on the source text and on the number of tries needed to reach the final version. Here again, the tasks are not performed in a strictly linear order, but with an observable set of precedences. First, the translators work on the source text: they process it to understand its meaning and structure. They also take time to discuss the meaning, making sure they both understand it in the same way. Then come the search-related tasks, and the segmenting. Once the segmenting is done comes a first ordering pass and a first try at translation. After this initialisation phase comes a sequence of iterations between ordering and discussing translation, interrupted by episodes of discussion about the meaning of portions of the text. As the translation progresses, the tasks become more focused on filming and discussing the translated portions. As observed, the filming task also serves as a memorizing task. For the first filmed translation attempts, the translator still relies on his written notes; the final translations, however, are produced entirely from memory.
Observing those six translations, we could not deduce a complete ordering of the tasks; it depends on the source text itself and on the translators' way of working. However, we did observe a partial order, or at least some kind of precedence, in the translation process. The searching tasks (lexical, map, encyclopedic, definitions) tend to occur as the translators discover the text. Their first step is understanding what they have to translate, at a general level first, then refining down to the smallest nuances of meaning in the source language. Once the meaning is clear, they work on the segmentation and the ordering of the information. This step is repeated until both translators agree on the final version of the translation to produce. The last step consists of memorizing the translation, sometimes rehearsing with one another, and lastly, of course, filming the final translation.
We can also observe that, while not every task is systematic, the tasks are consistent between the pairs. For example, for the first translation, both pairs looked up maps and pictures. Both pairs also looked up a map during the third translation.
Due to the difference in experience between the two pairs, we will make only a few observations. As a reminder, both pairs worked on the same texts. Here is what we can outline from this experiment:
Experience seems to have an influence on the tasks. We can observe that the beginner pair is the only one to search for lexical and encyclopedic knowledge, whereas the expert pair is the only one to work on memorisation.
The time spent on each translation is the other major difference between the two pairs: from 1 h 13 min (B) down to 19 min (E) for the first translation, and from 59 min (B) down to 14 min (E) for the second.
For longer texts, we observed more steps in the process, steps that were absent from the sessions involving shorter texts. When translating short news items, we did not observe the “translation” task: the process is mostly a succession of utility tasks (searching, ordering, discussing), followed by the spontaneous production of the translation. With longer texts, we do observe the “translation” task, which is the second most time-consuming behind segmenting and ordering. The translators try different versions, different ways of translating. They even discuss translation choices, which we did not observe when the pairs translated shorter texts. Based on those observations, we can draw a first methodology of the sign language translation process, but also a first list of functionalities that CAT software for SL should include. The longer-text data are not sufficient for generalisation, but they are a good start for thinking about an interface to assist SL translation. A segmentation tool seems necessary, as well as easy access to resources. The fact that translation is the task that occurs most often, added to the observation of multiple tries, points to the need for a flexible interface allowing parts to be moved or reused without starting again from scratch.
The next part presents the needs and practices expressed by the professionals.
Second experiment: brainstorming with sign language interpreters
The second part of the work involved a brainstorming session with Sign Language interpreters, with the goal of gathering more qualitative data. We consulted them for two reasons. On the one hand, there are very few professional Sign Language translators in France, and interpreters are often asked to act as translators. On the other hand, before major planned interpretations, they ask for written notes to prepare, and therefore process a source text as a translator would.
We organised a brainstorming session to verbalize common practices regarding translation as well as preparation, commonly encountered problems, current solutions, and concrete needs. The session lasted two and a half hours, with six interpreters who work for the same interpretation department and have a range of professional experience. The session was run under the authors' supervision, one taking notes and the other moderating the discussions, asking open and general questions about their practices. The main questions were displayed on slides so that everyone could refer to them when needed. Participants were given sticky notes and pens to write down their ideas, to be shared and discussed later (Fig. 4).
First, they were asked to think about what kinds of digital tools could help or ease the translator's job. We reformulated this question, asking a second time what kinds of tools they thought a computer or software could bring them, or could add to their current practices. This question was asked right away, and they had to answer it without any further information about CAT. The ideas they wrote on paper were displayed on the board.
In a second phase, the professionals were given a short introduction to the CAT concept. It included a brief presentation of software functionalities and tools, and of how CAT allows text-to-text translators to work faster and more efficiently. At the end of the presentation, we brought back the notes and compared them to the functionalities we had just presented. The second question was asked right after: “what are the specificities or the specific needs of sign language in terms of translation?”. The participants were free, and encouraged, to add more ideas to the list of functionalities and tools established during the first part. Following the CAT introduction, they could now compare what already exists with what SL translation needs in terms of tools, but also reflect on how to materialize it.
Lastly, the brainstorming session ended with a free discussion of the topic, where everyone was able to bring up more questions, express views and problems that had not come to mind earlier, and debate them with each other. During the entire session, we took note of the points mentioned by the participants, and kept the board of ideas. The results are listed below.
The following list is based both on what the professionals wrote on their papers during the brainstorming and on what was mentioned during the debate session or the free discussion times. We list each element with a short description to contextualise it.
Pictures are used as a base reference to create signs when finding a periphrasis or a suitable description is necessary (for example in very specific fields or companies' jargon; Harvey 2002). They can also allow better understanding of an unknown element that must be translated.
Dictionaries (in source language) are used for a better understanding of the source text, whether to perceive fine nuances of meaning or to look-up an unknown word.
NLP features were cited, such as named-entity recognition for places and personalities. They were also mentioned for dates, and numerical values in general. The idea of a tool that could gather percentage information and automatically turn it into a graphical representation emerged as well.
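As a minimal sketch of the percentage-gathering idea, here is a naive regex-based extractor. A real system would use a trained named-entity recogniser rather than this pattern, which we propose only as an illustration:

```python
import re

def extract_percentages(text):
    """Naively extract percentage values from a text, handling both
    French decimal commas and English decimal points. A stand-in for
    the NLP features the participants imagined."""
    matches = re.findall(r"(\d+(?:[.,]\d+)?)\s*%", text)
    return [float(m.replace(",", ".")) for m in matches]

extract_percentages("Unemployment fell from 9,5 % to 8.1% this year.")  # [9.5, 8.1]
```

The extracted values could then feed a chart generator, giving the signer a visual support instead of a string of numbers.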
Segmentation assistance was mentioned even before the short CAT presentation. The participants first suggested a tool that could help extract parts of the source text and emphasize them, or could make it easier to “cut pieces of the text” to build their own paragraphs. When presented with CAT software and its automatic segmentation into smaller units, they agreed on the utility of a similar tool for SL, but only if the smaller units could be freely reorganized.
“How to say” examples: this suggestion did not seem to refer to an existing tool at first. It was about some sort of database gathering SL examples of standard expressions or usual constructions. In hindsight, it resembles a concordancer.
SL resources: in a more general manner, the participants easily and immediately agreed on the need for more SL resources. Current SL resources are very rare and, when found, can be very time-consuming to consult because of their poor searchability. According to our participants, they are difficult to find, badly referenced, and difficult to share and access.
Drawing and filming possibilities are viewed as a way to take notes, or to think about the translation itself. Participants mentioned the use of drawings to organize a discourse, and filming as a way to make a draft of the translation.
Translation memory was mentioned before the CAT presentation by one of the participants, who had been a text-to-text translator in the past. Once the presentation was over, everyone agreed on the benefit they could derive from such a tool adapted to SL, even though they had no suggestions on how to make it work with SL.
Definition database in SL is roughly equivalent to the dictionaries suggestion (item 2). The idea was to build SL dictionaries with more precise lexicons, sorting the content by topic.
Encyclopaedic resources: as explained and observed earlier, SL translation can be constrained by reality, hence the need for encyclopaedic knowledge. According to those participants, map search also fits in this category.
Video examples of translation: this category is close to the “how to say” examples mentioned above, except that it concerns only translation examples, not SL examples in general. It expresses the need for a TM equivalent in SL (Fig. 5).
The suggestions made here tend to match a majority of already existing CAT tools, such as glossaries, dictionaries, and TM. The need for a segmentation tool is clearly expressed as well.
Discussion and limits
The results gathered from the two experiments are relatively consistent, but they also exhibit some contrasts. The first experiment identified practices that the second did not reference, such as map search, encyclopaedic search for context, and rehearsing and memorizing the translation. Conversely, the second shed light on problems we had not clearly identified in the observations: the lack of shared Sign Language resources, and the need to access prior work, whether done by others or by oneself. This last point shows interest in a Sign equivalent of translation memory, which text-to-text translators use quite often. Integrating such a feature in CAT software for Sign Language raises a new challenge: without the principle of linearity, text and Sign segments do not match and cannot be aligned automatically as they are in text-to-text translation. Alignment may require the translators’ direct involvement, which is a new constraint on the software functionality.
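To make the non-linearity problem concrete, the following minimal sketch (our own illustration, not the authors’ implementation; field names are hypothetical) shows why translator-made alignments cannot be assumed to follow the source order, as an automatic sentence-by-sentence aligner would require:

```python
# Hypothetical record structure for translator-made alignments.
# Because SL does not follow the text's linear order, one video
# interval may cover several source sentences, in any order.
manual_alignments = [
    {"text_segments": [2],    "video_span": (0.0, 3.4)},  # sentence 3 signed first
    {"text_segments": [0, 1], "video_span": (3.4, 9.8)},  # sentences 1-2 merged, signed later
]

# A linear (text-to-text style) aligner assumes one source sentence
# per target segment, in source order; that assumption fails here:
orders = [a["text_segments"] for a in manual_alignments]
is_linear = orders == sorted(orders) and all(len(o) == 1 for o in orders)
```

Here `is_linear` is false, which is precisely why the alignment step must stay in the translator’s hands.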
Both experiments show the importance of segmenting and re-ordering the source text. The evidence produced against the principle of linearity in SL also underlines the irrelevance of automated segmentation for SL. The translator must remain free to segment as necessary, depending on the source text.
This data set is too small to establish a definitive description of the SL translation process, and includes too few professionals to generalize. It is enough, though, to start building a first specification for SL CAT software based on the methodology observed in practice during the experiments. The experiments are worth replicating with more subjects to refine the observations, and thus the software specifications. Also, these were interactive translations, as the translators worked in pairs. Most of the observations are based on discussions between the two, so it would be interesting to compare these results with translators working on their own, when they cannot rely on a colleague for feedback.
Specifications for the interface
These experiments provided valuable clues with which to start a specification for CAT software dedicated to SL. This study served as a reference for the elaboration and planning of further studies.
The TM tool is a key feature of current CAT software for text-to-text translation, and its concept was mentioned during the brainstorming session. However, it is impossible to adapt it directly to SL, for two major reasons. First, SL translations are video files. Contrary to text segments, videos are not queryable, are difficult to edit, and consume a lot of storage space. Secondly, whereas a text-to-text TM stores text alignments and allows the translator to reuse them in a straightforward manner in the current translation task, this is not possible with video segments: the result would be a patchwork of different signers or outfits, and would lead to a very poor translation. In parallel, a strong need for previously translated content, or more generally for searchable SL examples, was also expressed and observed. To answer both needs, we chose to build an SL concordancer. A concordancer is, regardless of the languages involved, a search engine that can look through corpora and list every occurrence of a queried word. When it comes to translation, bilingual concordancers are used: the query is made in a source language, and the results are provided with an aligned translation in the target language, in our case SL. Based on a parallel corpus, users can look up words or expressions to get examples of them in context.
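The core mechanism of a bilingual concordancer with an SL target can be sketched in a few lines. This is a minimal illustration under our own assumptions (the record fields and file names are hypothetical, not taken from the actual platform): each corpus entry aligns a French source segment with a time interval in an SL video, and a query returns every entry whose source text contains the queried expression.

```python
from dataclasses import dataclass

@dataclass
class Alignment:
    source_text: str   # French source segment
    video_file: str    # aligned SL video clip
    start_s: float     # clip start time (seconds)
    end_s: float       # clip end time (seconds)

def concordance(corpus, query):
    """List every alignment whose source segment contains the query,
    so the translator can watch the signed equivalent in context."""
    q = query.lower()
    return [a for a in corpus if q in a.source_text.lower()]

# Toy parallel corpus (hypothetical clips)
corpus = [
    Alignment("le président de la République", "clip_017.mp4", 12.4, 15.1),
    Alignment("la cour d'appel de Paris", "clip_003.mp4", 2.0, 4.8),
]
hits = concordance(corpus, "président")
```

A real system would add linguistic normalization (lemmatization, accent folding) on the source side, but the search-then-play-aligned-video loop is the essential workflow.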
The elaboration of the concordancer itself and its associated database are the topics of other articles referenced below (Kaczmarek and Filhol 2020a, b). The concordancer also includes an alignment function for anybody who wishes to help us grow the database. The user simply selects the content that requires alignment and, with a few clicks, creates new alignments to browse with the concordancer.
It is currently available online at platform.postlab.fr. You can either test it using a public test account, or create a personal account on the website. In either case, contact us by e-mail (firstname.lastname@example.org) so we can provide you with the information needed (Fig. 6).
The other strong observation drawn from the results concerns the segmentation and ordering process, and what we called the principle of linearity. We saw that it does not apply to SL translation, so SL translators should be able to segment the text freely. And because they need to manipulate sequences of the source discourse, those segments must remain freely organisable. To meet those needs, we developed a flexible interface concept.
In current CAT tools, the source text is automatically segmented; the segments can only be filled with text, and their order cannot be modified (it follows the source text order). In our interface, users are free to create their own segments, as many as needed. They are pop-up boxes, which can be moved and reordered as many times as wanted, and filled with various types of content: text, but also video, drawings, pictures, etc. This point has been presented in a previous paper (Kaczmarek and Filhol 2019).
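The underlying data model of such a draft workspace is simple, and the following sketch (our own illustration, with hypothetical type and method names) captures the two properties that distinguish it from classic CAT segments: boxes hold heterogeneous content, and their order is fully under the user’s control:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    kind: str     # "text", "video", "drawing", "picture", ...
    content: str  # the text itself, or a file path for media

class Draft:
    """A translation draft: user-created boxes that can be reordered
    at will, unlike the fixed segments of classic CAT tools."""
    def __init__(self):
        self.boxes: List[Box] = []

    def add(self, box: Box):
        self.boxes.append(box)

    def move(self, i: int, j: int):
        """Reorder freely: move the box at index i to index j."""
        self.boxes.insert(j, self.boxes.pop(i))

d = Draft()
d.add(Box("text", "intro sentence"))
d.add(Box("video", "draft_sign.mp4"))
d.add(Box("drawing", "spatial_map.png"))
d.move(2, 0)  # bring the drawing first: segment order is free
```

After the `move` call the drawing comes first, then the text, then the video, reflecting a signing order that differs from the source order.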
Both features described above are intended to be part of a fully integrated translation environment, along with other tools such as NLP features to help mark up the text, or encyclopaedic research automation. Indeed, those tools were also mentioned during our experiments.
The goal of this study was to identify the steps of the translation process from written language to SL. Both quantitative and qualitative data were collected in order to produce as accurate a description as possible. Because there are currently no tools supporting SL translation, we started from an analysis of text-to-text CAT software and compared it with the participants’ practices and insights. The results of the two experiments converged: ordering is the most important part of the translation process, and translators express the need to access previously translated content or, at least, signed resources. We built an online concordancer, which stores SL parallel content in context. This concordancer aims to be a participative tool, meaning translators can add alignments to grow the database and thereby make the tool more efficient. To ease the ordering task, we are working on an interface of freely movable boxes to serve as a draft during the translation process. Users can fill those boxes with any type of content as they process the text (written content, pictures, maps, videos), and move them around to modify the order without starting again from scratch. A prototype of this interface will soon be under development. Both tools presented here are to be integrated in a full translation environment.
This study is among the first, if not the first, to work not only on CAT for SL, but also on the SL translation process itself. We are working with professionals from whom we are collecting feedback. We hope to foster interest in the subject, and to have the opportunity to pursue these experiments with more professionals and more translations to annotate.
SDL Trados, MemoQ, OmegaT, Wordfast, STAR Transit...
Barberis D et al (2010) Language resources for computer assisted translation from Italian to Italian Sign Language of deaf people. In: Proceedings of the AEGIS workshop and international conference
Bertoldi N (2010) On the creation and the annotation of a large-scale Italian-LIS parallel corpus. In: Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, pp 19–22
Bukhari J (2015) American Sign Language translation through sensory glove: SignSpeak. Int J u- and e-Serv 8(1):131–142
Cuxac C, Sallandre M (2007) Iconicity and arbitrariness in French Sign Language: highly iconic structures, degenerated iconicity and diagrammatic iconicity. In: Empirical approaches to language typology 36:13
Filhol M, Tannier X (2014) Construction of a French-LSF corpus. In: Building and using comparable corpora, Proceedings of the Language Resources and Evaluation Conference, Reykjavik, Iceland
Halawani SM (2008) Arabic sign language translation system on mobile devices. Int J Comput Sci Network Security 8:251–256
Kaczmarek M, Filhol M (2019) Assisting sign language translation: what interface given the lack of written form and spatial grammar? In: Proceedings of translating and the computer, vol 41, pp 83–93
Kaczmarek M, Filhol M (2020a) Alignments data base for a sign language concordancer. In: Proceedings of 12th international conference on language resources and evaluation (LREC 2020), Marseille, pp 6072–6072
Kaczmarek M, Filhol M (2020b) Use cases for a sign language concordancer. In: Proceedings of 9th workshop on the representation and processing of sign languages resources in the service of the language community, technological challenge and application perspectives (RPSL2020), Marseille, pp 113–116
Koehn P (2009) A process study of computer-aided translation. Kluwer Academic Publishers
Lagoudaki E (2006) Translation memory survey 2006: users’ perceptions around TM use. In: Proceedings of Translating and the Computer, vol 28
O’Hagan M (2009) Computer-aided translation (CAT). In: Baker M, Saldanha G (eds) Routledge encyclopedia of translation studies. Routledge, London and New York, pp 48–51
UN General Assembly (2008) Convention on the rights of persons with disabilities: resolution/adopted by the General Assembly, A/RES/61/106
Wolfe R, Efthimiou E, Glauert J, Hanke T, McDonald J, Schnepp J (eds) (2016) Special issue: recent advances in sign language translation and avatar technology. Springer International Publishing
This work has been funded by the Bpifrance investment project "Grands défis du numérique", as part of the ROSETTA project (RObot for Subtitling and intElligent adapTed TranslAtion).
Longer text example (French)
L'un des avocats de Jean-Louis Gergorin, Me Paul-Albert Iweins, a expliqué mercredi à la cour d'appel de Paris qu'il comprenait que Dominique de Villepin n'ait pas dit toute la vérité dans l'affaire Clearstream car il devait protéger le président Chirac. Jean-Louis Gergorin et le général Philippe Rondot affirment qu'à plusieurs reprises en 2004, Dominique de Villepin s'est recommandé d'instructions du président de la République pour enquêter sur les listings Clearstream qui mettaient notamment en cause Nicolas Sarkozy. "C'est tout à son honneur: quand on est ministre, on n'implique pas le chef de l'Etat", a plaidé Me Iweins. Quant aux rendez-vous que Dominique de Villepin nie avoir eus avec Jean-Louis Gergorin au premier semestre 2004, "je sais pourquoi ces rendez-vous ne sont pas reconnus", a-t-il dit, "car Jean-Louis Gergorin n'en a pas parlé au départ", lors de ses premières dépositions. Aux yeux de l'avocat, il est donc normal que Dominique de Villepin, lorsqu'il est entendu, "confirme les déclarations de Jean-Louis Gergorin, mais il ne va pas aller plus loin". "Mettez-vous à sa place", a-t-il supplié, en rappelant la frénésie médiatique de l'époque. Ainsi, à la sortie de la garde à vue de Jean-Louis Gergorin, en juin 2006, l'avocat se souvient avoir été assailli par une nuée de caméras, avec une seule question à la bouche: "Est-ce que votre client met en cause Dominique de Villepin? Si j'avais dit oui, le gouvernement sautait."
Longer text example (English)
One of Jean-Louis Gergorin's lawyers, Paul-Albert Iweins, explained on Wednesday at the Paris Court of Appeal that he understood that Dominique de Villepin did not tell the whole truth in the Clearstream case because he had to protect President Chirac. Jean-Louis Gergorin and General Philippe Rondot said that on several occasions in 2004, Dominique de Villepin recommended himself for instructions from the President of the Republic to investigate the Clearstream listings that implicated Nicolas Sarkozy. "It's to his credit that when you're a minister, you don't involve the head of state," Iweins said. As for the meetings that Dominique de Villepin denies having had with Jean-Louis Gergorin in the first half of 2004, "I know why these meetings are not recognised," he said, "because Jean-Louis Gergorin did not talk about them at the beginning," during his first depositions. In the eyes of the lawyer, it is therefore normal that Dominique de Villepin, when he is heard, "confirms Jean-Louis Gergorin's statements, but he is not going to go any further". "Put yourself in his place," he begged, recalling the media frenzy. When Jean-Louis Gergorin was released from police custody in June 2006, the lawyer remembers being assaulted by a swarm of cameras with only one question on his lips: "Is your client implicating Dominique de Villepin? If I had said yes, the government would have collapsed."
Kaczmarek, M., Filhol, M. Computer-assisted sign language translation: a study of translators’ practice to specify CAT software. Machine Translation 35, 305–322 (2021). https://doi.org/10.1007/s10590-021-09278-w
Keywords: Sign language; Computer-assisted translation