Abstract
This study evaluates a novel human-machine collaborative machine translation workflow enhanced by Large Language Model features, including pre-editing instructions, interactive concept-based post-editing, and the archiving of concepts in post-editing and translation memories. By implementing GPT-4 prompts for concept-based steering in English-to-Chinese translation, we explore its effectiveness compared to traditional machine translation methods such as Google Translate, human translators, and an alternative human-machine collaboration approach that uses human-generated reference texts instead of concept descriptions. Our findings suggest that while GPT-4’s discourse-level analysis and augmented instructions show potential, they do not surpass the nuanced understanding of human translators at the sentence level. However, GPT-4-augmented concept-based interactive post-editing significantly outperforms both traditional methods and the alternative method relying on human reference translations. In testing English-to-Chinese translation concepts, GPT-4 effectively elucidates nearly all concepts, precisely identifies the relevance of concepts within source texts, and accurately translates into target texts embodying the related concepts. Nevertheless, some complex concepts require more sophisticated prompting techniques, such as Chain-of-Thought, or pre-editing strategies, such as explicating linguistic patterns, to achieve optimal performance. Despite GPT-4’s capabilities, human language experts possess superior abductive reasoning. Consequently, at the present stage, humans must apply abductive reasoning to create more specific instructions and develop additional logic steps in prompts, which complicates prompt engineering. Eventually, enhanced abductive reasoning capabilities in large language models will bring their performance closer to human-like levels.
The proposed novel approach introduces a scalable, concept-based strategy that can be applied across multiple text segments, enhancing machine translation workflow efficiency.
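To illustrate the kind of concept-based steering the abstract describes, the following is a minimal sketch of how a translation prompt might be assembled from archived concept descriptions. The function name, prompt template, and the example pro-drop concept wording are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch: assembling a concept-based steering prompt for English-to-Chinese
# translation. Concept descriptions could be retrieved from a post-editing
# or translation memory and injected into the prompt for each segment.

def build_concept_prompt(source_text, concepts):
    """Compose a translation prompt instructing the model to apply
    the given linguistic concepts (name -> description) where relevant."""
    concept_lines = "\n".join(f"- {name}: {desc}" for name, desc in concepts.items())
    return (
        "Translate the following English text into Chinese.\n"
        "Apply these translation concepts where relevant:\n"
        f"{concept_lines}\n\n"
        f"Source text:\n{source_text}"
    )

# Hypothetical archived concept: pro-drop (subject pronoun omission in Chinese).
concepts = {
    "pro-drop": "Omit subject pronouns in Chinese when the referent is clear from context.",
}
prompt = build_concept_prompt("It was raining, so we stayed home.", concepts)
print(prompt)
```

Because the same concept dictionary can be reused for every segment in a document, this style of prompt assembly scales across multiple text segments, which is the efficiency gain the proposed workflow targets.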
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Qian, M., Kong, C. (2024). Enabling Human-Centered Machine Translation Using Concept-Based Large Language Model Prompting and Translation Memory. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2024. Lecture Notes in Computer Science(), vol 14736. Springer, Cham. https://doi.org/10.1007/978-3-031-60615-1_8
Print ISBN: 978-3-031-60614-4
Online ISBN: 978-3-031-60615-1