Collection

Large Language Models and AI-Generated Content

Recent years have witnessed rapid and remarkable progress in large language models (LLMs), e.g., ChatGPT, GPT-4, Bard, and Claude. These emerging LLMs not only revolutionize the field of natural language processing, but also have a transformative impact on AI, science, and society. On the one hand, LLMs serve as backbone models for generative AI, enabling AI-generated content (AIGC) in a variety of forms, e.g., text, images, video, and audio. On the other hand, challenges coexist with these opportunities. Due to the black-box nature of LLMs, the theoretical underpinnings of emergent abilities, chain-of-thought capability, instruction generalization, etc., are not yet clear. Value alignment of LLMs, which addresses ethical concerns arising from different aspects of LLMs and makes them safe, is both a societal and a technological desideratum.

This Topical Collection solicits articles on large language models and on AI-generated content produced either by large language models or by other technologies. Topics of interest include, but are not limited to:

- Data processing and governance for large language models/AIGC

- Deep analysis of capabilities of large language models

- Interpretable large language models

- Neural architectures for large language models/AIGC

- Large multimodal models/AIGC

- Training and inference algorithms for large language models/AIGC

- Approaches to the alignment of large language models, e.g., RLHF

- Ethics issues of large language models/AIGC

- Various applications of LLMs in AIGC or for social good

- Automatic/Human Evaluations of LLMs/AIGC

Keywords: Large Language Models, AI-Generated Content, AI Alignment, Natural Language Processing, Ethics Issues of LLMs, LLM Evaluation, LLM Interpretability

Editors

  • Deyi Xiong

    Prof. Deyi Xiong is a professor of Computer Science at Tianjin University (TJU), China. He is Director of both the Natural Language Processing Laboratory and the International Joint Research Center of Language Intelligence and Technology at TJU. His research focuses on NLP, specifically machine translation, dialogue, and LLMs, with over 100 papers published in prestigious journals and conferences. He was program co-chair of IALP 2021 and CWMT 2017, an area chair of conferences such as ACL, and the founder and co-organizer of multiple ACL/EMNLP-affiliated workshops. He is an action editor of TACL and an editorial board member of the International Journal of Asian Language Processing.

  • Hongfei Xu

    Dr. Hongfei Xu is a lecturer of software engineering at Zhengzhou University (ZZU), China. He obtained his PhD degree (summa cum laude) from Saarland University, Germany. His research focuses on natural language processing, with particular interests in machine translation, neural model architectures, training techniques for neural models, and low-resource NLP. He has published more than 10 papers at top-tier conferences, including IJCAI, ACL, EMNLP, NAACL, and COLING. He was a session chair of IJCAI 2020 and IALP 2021, and has served as a program committee member of conferences including IJCAI, AAAI, ACL, EMNLP, NAACL, and COLING.

  • Josef van Genabith

    Prof. Josef van Genabith is a Scientific Director at the German Research Centre for Artificial Intelligence (DFKI), Germany, where he leads the Multilingual Language Technologies (MLT) Lab. He is also a Professor at Saarland University, holding the Chair of Translation-Oriented Language Technologies. His research focuses on natural language processing and machine translation. He has co-authored more than 200 research papers published in journals including Computational Linguistics, Machine Translation, Computer Speech and Language, and Natural Language Engineering, and at conferences including ACL, EMNLP, NAACL, and COLING.

Articles (2 in this collection)