
C5: toward better conversation comprehension and contextual continuity for ChatGPT

  • Regular Paper
  • Published:
Journal of Visualization

Abstract

Large language models (LLMs), such as ChatGPT, have demonstrated outstanding performance in various fields, particularly in natural language understanding and generation tasks. In complex application scenarios, users tend to engage in multi-turn conversations with ChatGPT to retain contextual information and obtain comprehensive responses. However, human forgetting and model contextual forgetting remain prominent issues in multi-turn conversation scenarios, which challenge users' comprehension of the conversation and ChatGPT's contextual continuity. To address these challenges, we propose an interactive conversation visualization system called C5, which includes a Global View, a Topic View, and a Context-associated Q&A View. The Global View uses the GitLog diagram metaphor to represent the conversation structure, presenting the trend of conversation evolution and supporting the exploration of locally salient features. The Topic View displays all question and answer nodes and their relationships within a topic using the structure of a knowledge graph, thereby showing the relevance and evolution of conversations. The Context-associated Q&A View consists of three linked views, which allow users to explore individual conversations in depth while providing specific contextual information when posing questions. The usefulness and effectiveness of C5 were evaluated through a case study and a user study.
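The abstract describes conversations as branching structures of question-answer nodes grouped into topics. As a rough illustrative sketch only, and not the authors' implementation, the TypeScript below shows one way such a conversation model could be represented; the QANode and Conversation types and the contextFor helper are assumptions introduced for exposition.

// Minimal sketch of a branching multi-turn conversation model (illustrative assumption).
interface QANode {
  id: string;
  question: string;
  answer: string;
  topicId: string;         // topic cluster this turn belongs to (cf. Topic View)
  parentId: string | null; // preceding turn it branches from (cf. GitLog-style Global View)
}

interface Conversation {
  nodes: Map<string, QANode>;
}

// Walk back along parent links to gather the context that would accompany a new
// question, oldest turn first (cf. Context-associated Q&A View).
function contextFor(conv: Conversation, nodeId: string, maxTurns = 5): QANode[] {
  const context: QANode[] = [];
  let current: QANode | null = conv.nodes.get(nodeId) ?? null;
  while (current && context.length < maxTurns) {
    context.unshift(current);
    current = current.parentId ? conv.nodes.get(current.parentId) ?? null : null;
  }
  return context;
}

Under this assumed model, a GitLog-style Global View would render the parentId links as a branch graph of the whole conversation, while a Topic View would group and relate nodes sharing the same topicId.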




Acknowledgements

This work is supported by the Zhejiang Provincial Natural Science Foundation of China (LR23F020003 and LTGG23F020005) and the National Natural Science Foundation of China (62372411 and 62036009). We also thank the participants who took part in the requirement analysis and evaluation. Guodao Sun is the corresponding author.

Author information

Corresponding author

Correspondence to Guodao Sun.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below are the links to the electronic supplementary material.

Supplementary file1 (MP4 17454 KB)

Supplementary file2 (PDF 250 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liang, P., Ye, D., Zhu, Z. et al. C5: toward better conversation comprehension and contextual continuity for ChatGPT. J Vis (2024). https://doi.org/10.1007/s12650-024-00980-4


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s12650-024-00980-4
