
Enabling Human-Centered Machine Translation Using Concept-Based Large Language Model Prompting and Translation Memory

  • Conference paper
Artificial Intelligence in HCI (HCII 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14736)


Abstract

This study evaluates a novel human-machine collaborative machine translation workflow enhanced by Large Language Model features, including pre-editing instructions, interactive concept-based post-editing, and the archiving of concepts in post-editing and translation memories. By implementing GPT-4 prompts for concept-based steering in English-to-Chinese translation, we explore the approach's effectiveness compared with traditional machine translation methods such as Google Translate, with human translators, and with an alternative human-machine collaboration approach that uses human-generated reference texts instead of concept descriptions. Our findings suggest that while GPT-4's discourse-level analysis and augmented instructions show potential, they do not surpass the nuanced understanding of human translators at the sentence level. However, GPT-4-augmented concept-based interactive post-editing significantly outperforms both the traditional methods and the alternative method relying on human reference translations. In testing English-to-Chinese translation concepts, GPT-4 effectively elucidates nearly all concepts, precisely identifies the relevance of concepts within source texts, and accurately produces target texts embodying the related concepts. Nevertheless, some complex concepts require more sophisticated prompting techniques, such as Chain-of-Thought, or pre-editing strategies, such as explicating linguistic patterns, to achieve optimal performance. Despite GPT-4's capabilities, human language experts still possess superior abductive reasoning. Consequently, at the present stage, humans must apply abductive reasoning to create more specific instructions and to develop additional logic steps in prompts, which complicates the prompt-engineering process. As the abductive reasoning capabilities of large language models improve, their performance should move closer to human-like levels. The proposed approach introduces a scalable, concept-based strategy that can be applied across multiple text segments, enhancing machine translation workflow efficiency.
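To make the concept-based steering idea concrete, the sketch below shows one way such a prompt could be assembled and sent to GPT-4 for an English-to-Chinese segment. It is a minimal illustration only: the OpenAI Python SDK usage is assumed, and the concept descriptions, source sentence, and prompt wording are hypothetical placeholders rather than the authors' actual prompts or test materials.

```python
# Minimal sketch of concept-based prompting for English-to-Chinese MT.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
# Concept descriptions, source text, and prompt wording are illustrative
# placeholders, not the prompts or materials used in the paper.
from openai import OpenAI

client = OpenAI()

# A "concept" pairs a linguistic phenomenon with guidance on how to render it
# in Chinese; archived concepts can be reused across many text segments.
concepts = [
    "Pro-drop: Chinese permits omitting subject pronouns; drop pronouns "
    "whose referents are clear from context.",
    "Register: preserve the informal, conversational tone of the source.",
]

source_segment = "I was exhausted, so I stayed home and read all afternoon."

prompt = (
    "Translate the following English segment into Chinese.\n"
    "Apply these translation concepts:\n"
    + "\n".join(f"- {c}" for c in concepts)
    + f"\n\nSource segment:\n{source_segment}\n\n"
    "Return only the Chinese translation."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In an interactive post-editing loop of this kind, a reviewer could add or refine entries in the concept list and re-run the same call, and the accumulated concepts could be stored alongside translation-memory segments for reuse.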

Notes

  1. https://www.thecut.com/2020/03/book-excerpt-samantha-irbys-wow-no-thank-you.html

Author information

Corresponding author

Correspondence to Ming Qian.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Qian, M., Kong, C. (2024). Enabling Human-Centered Machine Translation Using Concept-Based Large Language Model Prompting and Translation Memory. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2024. Lecture Notes in Computer Science, vol 14736. Springer, Cham. https://doi.org/10.1007/978-3-031-60615-1_8

  • DOI: https://doi.org/10.1007/978-3-031-60615-1_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60614-4

  • Online ISBN: 978-3-031-60615-1

  • eBook Packages: Computer Science; Computer Science (R0)
