
Prompt Engineering in Large Language Models

  • Conference paper
  • First Online:
Data Intelligence and Cognitive Informatics (ICDICI 2023)

Abstract

With the rapid development of conversational Artificial Intelligence (AI), particularly Large Language Models (LLMs), prompt engineering has become an essential skill for effective communication and interaction with language-driven tools like ChatGPT. It can be leveraged to enforce rules and automate processes, ensuring both the quality and quantity of output from LLMs. Moreover, the order in which examples are provided within prompts, as well as automatic instruction generation and selection methods, has been shown to significantly affect LLM performance. Prompts can be optimized to maximize a chosen score function by searching a pool of candidate instructions. Automatically generated instructions match or exceed the performance of human-annotated instructions and outperform LLM baselines, which makes prompt engineering a programming procedure for customizing the outputs and interactions of LLMs. In this chapter, we provide a thorough understanding of prompt engineering and the latest prompt engineering techniques, with relevant exercises for putting the techniques into practice. We also discuss current and future trends in LLM and prompt engineering research, including the rise of automatic instruction generation and selection methods. These are important for prompt and NLP engineers, conversational AI researchers, and all users of LLMs and prompt engineering tools in sensitive domains such as health care, security, and education. The chapter provides an in-depth understanding of prompt engineering principles and techniques for responsible conversational AI.
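The prompt optimization the abstract describes — searching a pool of candidate instructions to maximize a chosen score function — can be sketched in a few lines. This is an illustrative stand-in, not the chapter's implementation: `score_instruction`, `select_best_instruction`, the candidate pool, and the mock model are all hypothetical names, and in practice the scoring loop would call an actual LLM on a held-out evaluation set.

```python
# Minimal sketch of instruction selection: score each candidate
# instruction on a small evaluation set and keep the best one.
# The model call is a toy stand-in for a real LLM API.

def score_instruction(instruction, eval_set, answer_fn):
    """Fraction of (question, expected) pairs answered correctly
    when the model is guided by this instruction."""
    correct = sum(
        1 for question, expected in eval_set
        if answer_fn(instruction, question) == expected
    )
    return correct / len(eval_set)

def select_best_instruction(candidates, eval_set, answer_fn):
    """Return the candidate instruction maximizing the score function."""
    return max(
        candidates,
        key=lambda c: score_instruction(c, eval_set, answer_fn),
    )

if __name__ == "__main__":
    # Toy model: only a "step by step" instruction elicits the
    # correct arithmetic answer, mimicking a prompt-sensitive LLM.
    def mock_llm(instruction, question):
        a, b = map(int, question.split("+"))
        return a + b if "step by step" in instruction else a

    candidates = [
        "Answer the question.",
        "Let's think step by step.",
        "Reply with one word.",
    ]
    eval_set = [("2+3", 5), ("10+7", 17)]
    print(select_best_instruction(candidates, eval_set, mock_llm))
```

With a real LLM behind `answer_fn`, the same loop implements the search over automatically generated instructions discussed in the chapter; the score function (accuracy here) is whatever metric the task defines.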



Author information


Correspondence to Ggaliwango Marvin.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Marvin, G., Hellen, N., Jjingo, D., Nakatumba-Nabende, J. (2024). Prompt Engineering in Large Language Models. In: Jacob, I.J., Piramuthu, S., Falkowski-Gilski, P. (eds) Data Intelligence and Cognitive Informatics. ICDICI 2023. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-99-7962-2_30
