
Examining Lexical Alignment in Human-Agent Conversations with GPT-3.5 and GPT-4 Models

  • Conference paper
Chatbot Research and Design (CONVERSATIONS 2023)

Abstract

This study employs a quantitative approach to investigate lexical alignment in human-agent interactions involving the GPT-3.5 and GPT-4 language models. The research examines alignment performance across different conversational contexts and compares the two models. The findings highlight significant improvements in GPT-4’s ability to foster lexical alignment, as well as the influence of conversation topic on alignment patterns.
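The paper’s exact alignment metric is not reproduced in this excerpt. As a rough illustration only, lexical alignment between interlocutors is often quantified as vocabulary overlap across their turns. The following minimal sketch (pure Python; the function names and the Jaccard-overlap choice are the author-editor’s assumptions, not the paper’s method) shows the idea:

```python
import re

def vocab(turns):
    """Lowercased word types used across one speaker's turns."""
    words = set()
    for turn in turns:
        words.update(re.findall(r"[a-z']+", turn.lower()))
    return words

def lexical_overlap(human_turns, agent_turns):
    """Jaccard overlap between the two speakers' vocabularies.

    Higher values suggest stronger lexical alignment; 0.0 means
    no shared word types (or no words at all).
    """
    h, a = vocab(human_turns), vocab(agent_turns)
    union = h | a
    return len(h & a) / len(union) if union else 0.0

# Toy example: the agent reuses "mountains" and "peaks" from the human.
human = ["I love the mountains.", "The peaks calm me down."]
agent = ["Mountains and their peaks are a great retreat."]
print(round(lexical_overlap(human, agent), 3))  # → 0.143
```

In practice a study like this one would lemmatize and filter tokens first (the paper’s footnote 2 points to spaCy, which provides both), rather than splitting on raw word forms as above.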


Notes

  1. https://mistral.ai/.

  2. https://spacy.io/.


Acknowledgements

Contribution from the ITN project NL4XAI (Natural Language for Explainable AI). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860621. This document reflects the views of the author(s) and does not necessarily reflect the views or policy of the European Commission. The REA cannot be held responsible for any use that may be made of the information this document contains.

Author information

Corresponding author: Sumit Srivastava.


Appendix

The following prompts were used to supervise the GPT models’ responses in the two types of conversations.

  1. Non-task-oriented conversation about hobbies.

    I’d like to engage in a casual conversation about hobbies with you. I want you to follow these rules:

    (a) Respond conversationally as a human peer would, ensure consistent engagement and avoid monologues. Do not give too much extra information as if you are retrieving information.

    (b) Ask maximum one question per response to maintain the conversation.

    (c) Don’t end the conversation; always seek ways to continue.

    (d) If asked a question, respond as a human would, avoiding references to being an AI.

    (e) I will start the first sentence.

    Are you ready to begin?

  2. Task-oriented conversation about storytelling.

    I’d like to collaborate with you on storytelling. We’ll co-create a story, each adding a few sentences in turns. Please follow these rules:

    (a) Add a few sentences per turn, ensuring your contributions are concise yet meaningful.

    (b) Match your storytelling style to my input without being overly literary.

    (c) Differentiate between story contributions and meta-comments. If I share feedback or ideas, respond conversationally before we continue the story.

    (d) Avoid dominating the story or repeating my contributions. Build upon what I’ve added.

    (e) I will start the first sentence.

    Are you ready to begin?

The following are examples of conversations between a human and GPT on the two tasks. Note that the conversations shown below are not from the actual participants in our experiment, as the participants did not give explicit consent to publish their transcripts. The examples are representative illustrations created with the respective GPT models to reflect the nature and pattern of interactions observed during the study.

  1. With GPT-3.5 on a non-task-oriented conversation.

     Human: I’m planning a trip to the mountains. The serene atmosphere always calms me down.

     GPT-3.5: It’s a great choice to visit the mountains, especially if you’re seeking peace and a break from the hustle and bustle. Mountains, with their vast landscapes and varying altitudes, provide a unique sense of tranquillity that’s hard to find elsewhere. Are you looking to go somewhere you’ve been before or exploring new peaks?

     Human: Familiar spot for sure. The fresh air and the stillness of the peaks are like a yearly retreat for me.

     GPT-3.5: It’s wonderful how the mountains, with their crisp air and majestic peaks, become a personal retreat for many. Revisiting them is like rekindling a bond. Do you engage in any particular activities when you’re there?

  2. With GPT-3.5 on a task-oriented (storytelling) conversation.

     Human: Let’s create a story about Emily, an ambitious archaeologist, seeking the hidden city known as the Oasis of Golden Sands.

     GPT-3.5: Ah, the story of Emily, venturing into the depths of the unknown, the treacherous landscapes, searching high and low for the Oasis of Golden Sands, a city that, as legends often say, disappeared mysteriously many eons ago, leaving behind tales of vast treasures and complex mazes designed to protect them.

     Human: She believes her family’s ancient scroll might have clues to navigate the dangerous Desert of Silent Whispers.

     GPT-3.5: That very scroll, meticulously preserved over countless generations, is filled with symbols of a forgotten language and contains markings that hint at hidden pathways. Many believe, especially within the archaeology community, that it offers detailed guidance to traverse the Desert of Silent Whispers. This desert, vast in its expanse, is not only known for its shifting trails but also for the elusive murmurs echoing from its depths.

     Human: With such a guide, Emily hopes to surpass past explorers and finally uncover the city’s heart.

  3. With GPT-4 on a task-oriented (storytelling) conversation.

     Human: Let’s weave a tale about Emily, a daring archaeologist, on her journey to discover the fabled city, the Oasis of Golden Sands.

     GPT-4: Emily, perched on the cusp of legend and reality, seeks the Oasis—a beacon lost in time, waiting for the one who can decode its riddles.

     Human: Guided by the beacon-like glow of her inherited scroll, she believes its secrets can unlock the Desert of Silent Whispers, leading straight to the legendary city.

     GPT-4: In that vast tapestry of desert, tales of old intertwine with shifting sands, becoming a puzzle that Emily, and those before her, yearn to solve—each grain bearing witness to epic quests for the Oasis’ chronicles.

     Human: Along her path, Emily encounters enigmatic desert dwellers, the very threads of this living tapestry, each holding a piece of the grand puzzle she strives to complete.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, B., Theune, M., Srivastava, S. (2024). Examining Lexical Alignment in Human-Agent Conversations with GPT-3.5 and GPT-4 Models. In: Følstad, A., et al. Chatbot Research and Design. CONVERSATIONS 2023. Lecture Notes in Computer Science, vol 14524. Springer, Cham. https://doi.org/10.1007/978-3-031-54975-5_6


  • Print ISBN: 978-3-031-54974-8

  • Online ISBN: 978-3-031-54975-5
