Abstract
Large language models (LLMs) have recently revolutionized performance on a variety of natural language generation tasks, but their capacity to generate plausible character choices, along with the subsequent decisions and consequences, given a narrative context has not yet been studied. We use recent film plot excerpts (released after the models' training data cutoff) as initial narrative contexts and explore how different prompt formats affect narrative choice generation by open-source LLMs. The results provide a first step toward understanding effective prompt engineering for future human-AI collaborative development of interactive narratives.
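To make the task concrete, the choice/decision/consequence setup described above can be sketched as a simple prompt-assembly function. This is a minimal illustration, not the paper's actual prompt format: the function name `build_choice_prompt`, its wording, and the sample excerpt are all hypothetical.

```python
def build_choice_prompt(plot_excerpt: str, num_choices: int = 3) -> str:
    """Assemble a prompt asking an LLM to propose character choices,
    then a decision and its consequence, given a plot excerpt.
    (Illustrative only; the paper compares several such formats.)"""
    return (
        "Read the following film plot excerpt.\n\n"
        f"Plot excerpt: {plot_excerpt}\n\n"
        f"List {num_choices} reasonable choices the main character could make next. "
        "Then state which choice the character makes and describe its consequence."
    )

# Example usage with an invented excerpt:
prompt = build_choice_prompt(
    "A stranger hands Mara a key and vanishes into the crowd."
)
print(prompt)
```

Varying the instruction wording, the number of requested choices, or the ordering of the choice/decision/consequence components yields the kinds of prompt-format variations the study compares.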
6 Appendix
This appendix includes descriptions and examples of each type of failure category observed in the choice/decision/consequence generation task. Figure 3 provides a sample prompt with an invented plot excerpt that serves as a running example as each failure type is considered in Tables 5, 6 and 7. An example of a successful response is provided in Fig. 4.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Harmon, S., Rutman, S. (2023). Prompt Engineering for Narrative Choice Generation. In: Holloway-Attaway, L., Murray, J.T. (eds) Interactive Storytelling. ICIDS 2023. Lecture Notes in Computer Science, vol 14383. Springer, Cham. https://doi.org/10.1007/978-3-031-47655-6_13
Print ISBN: 978-3-031-47654-9
Online ISBN: 978-3-031-47655-6